markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Component Makeup: We can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. The following code shows the feature-level makeup of the first component. Note that the components are again ordered from smallest to largest, so indexing the column at N_COMPONENTS-component_num retrieves the correct component; with component_num=1 this selects the top component. | import seaborn as sns
def display_component(v, features_list, component_num, n_weights=10):
    # get index of the component's column (components are ordered smallest to largest)
row_idx = N_COMPONENTS-component_num
    # get the list of weights from a column in v, dataframe
v_1_row = v.iloc[:, row_idx]
v_1 = np.squeeze(v_1_row.values)
    # match weights to features in counties_scaled dataframe, using list comprehension
comps = pd.DataFrame(list(zip(v_1, features_list)),
columns=['weights', 'features'])
# we'll want to sort by the largest n_weights
# weights can be neg/pos and we'll sort by magnitude
comps['abs_weights']=comps['weights'].apply(lambda x: np.abs(x))
sorted_weight_data = comps.sort_values('abs_weights', ascending=False).head(n_weights)
# display using seaborn
ax=plt.subplots(figsize=(10,6))
ax=sns.barplot(data=sorted_weight_data,
x="weights",
y="features",
palette="Blues_d")
ax.set_title("PCA Component Makeup, Component #" + str(component_num))
plt.show()
# display makeup of first component
num=1
display_component(v, counties_scaled.columns.values, component_num=num, n_weights=10) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Deploying the PCA Model: We can now deploy this model and use it to make "predictions". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point. Run the cell below to deploy/host this model on an instance_type that we specify. | %%time
# this takes a little while, around 7mins
pca_predictor = pca_SM.deploy(initial_instance_count=1,
instance_type='ml.t2.medium') | -----------------!CPU times: user 319 ms, sys: 14 ms, total: 333 ms
Wall time: 8min 32s
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
We can pass the original, numpy dataset to the model and transform the data using the model we created. Then we can take the largest n components to reduce the dimensionality of our data. | # pass np train data to the PCA model
train_pca = pca_predictor.predict(train_data_np)
# check out the first item in the produced training features
data_idx = 0
print(train_pca[data_idx]) | label {
key: "projection"
value {
float32_tensor {
values: 0.0002009272575378418
values: 0.0002455431967973709
values: -0.0005782842636108398
values: -0.0007815659046173096
values: -0.00041911262087523937
values: -0.0005133943632245064
values: -0.0011316537857055664
values: 0.0017268601804971695
values: -0.005361668765544891
values: -0.009066537022590637
values: -0.008141040802001953
values: -0.004735097289085388
values: -0.00716288760304451
values: 0.0003725700080394745
values: -0.01208949089050293
values: 0.02134685218334198
values: 0.0009293854236602783
values: 0.002417147159576416
values: -0.0034637749195098877
values: 0.01794189214706421
values: -0.01639425754547119
values: 0.06260128319263458
values: 0.06637358665466309
values: 0.002479255199432373
values: 0.10011336207389832
values: -0.1136140376329422
values: 0.02589476853609085
values: 0.04045158624649048
values: -0.01082391943782568
values: 0.1204797774553299
values: -0.0883558839559555
values: 0.16052711009979248
values: -0.06027412414550781
}
}
}
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Create a transformed DataFrame. For each of our data points, get the top n component values from the list of component data points, returned by our predictor above, and put those into a new DataFrame. You should end up with a DataFrame that looks something like the following:
```
                  c_1       c_2       c_3       c_4       c_5   ...
Alabama-Autauga -0.060274  0.160527 -0.088356  0.120480 -0.010824 ...
Alabama-Baldwin -0.149684  0.185969 -0.145743 -0.023092 -0.068677 ...
Alabama-Barbour  0.506202  0.296662  0.146258  0.297829  0.093111 ...
...
``` | # create dimensionality-reduced data
def create_transformed_df(train_pca, counties_scaled, n_top_components):
''' Return a dataframe of data points with component features.
The dataframe should be indexed by State-County and contain component values.
:param train_pca: A list of pca training data, returned by a PCA model.
:param counties_scaled: A dataframe of normalized, original features.
:param n_top_components: An integer, the number of top components to use.
:return: A dataframe, indexed by State-County, with n_top_component values as columns.
'''
# create new dataframe to add data to
counties_transformed=pd.DataFrame()
# for each of our new, transformed data points
# append the component values to the dataframe
for data in train_pca:
# get component values for each data point
components=data.label['projection'].float32_tensor.values
counties_transformed=counties_transformed.append([list(components)])
# index by county, just like counties_scaled
counties_transformed.index=counties_scaled.index
# keep only the top n components
start_idx = N_COMPONENTS - n_top_components
counties_transformed = counties_transformed.iloc[:,start_idx:]
# reverse columns, component order
return counties_transformed.iloc[:, ::-1]
| _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Now we can create a dataset where each county is described by the top n principal components that we analyzed earlier. Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously. Define the `top_n` components to use in this transformed data. Your code should return data, indexed by 'State-County' and with as many columns as `top_n` components. You can also choose to add descriptive column names for this data; names that correspond to the component number or feature-level makeup. (A sketch for checking how much variance the chosen `top_n` components capture appears after this cell.) | ## Specify top n
top_n = 7
# call your function and create a new dataframe
counties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)
## TODO: Add descriptive column names
PCA_list=['c_1', 'c_2', 'c_3', 'c_4', 'c_5', 'c_6', 'c_7']
counties_transformed.columns=PCA_list
# print result
counties_transformed.head() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
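As a quick check on this choice of `top_n`, here is a minimal sketch (not part of the original exercise) of the proportion of variance the top components capture. It assumes `s`, the singular-value DataFrame returned by the trained PCA model, is available from earlier in the notebook.

```python
# Hypothetical check: proportion of variance explained by the top_n components.
# Assumes `s` (singular values from the PCA model) was loaded earlier in the notebook.
s_vals = np.squeeze(s.values)
exp_variance = np.square(s_vals[-top_n:]).sum() / np.square(s_vals).sum()
print('Explained variance of top {} components: {:.3f}'.format(top_n, exp_variance))
```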
Delete the Endpoint! Now that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint. As a clean-up step, you should always delete your endpoints after you are done using them (and if you do not plan to deploy them to a website, for example). | # delete predictor endpoint
session.delete_endpoint(pca_predictor.endpoint) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
--- Population Segmentation: Now, you'll use the unsupervised clustering algorithm, k-means, to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we have ~3000 counties and 34 attributes in the original dataset, the large feature space may have made it difficult to cluster the counties effectively. Instead, we have reduced the feature space to 7 PCA components, and we'll cluster on this transformed dataset. EXERCISE: Define a k-means model. Your task will be to instantiate a k-means model. A `KMeans` estimator requires a number of parameters to be instantiated, which allow us to specify the type of training instance to use, and the model hyperparameters. You can read about the required parameters in the [`KMeans` documentation](https://sagemaker.readthedocs.io/en/stable/kmeans.html); note that not all of the possible parameters are required. Choosing a "Good" K: One method for choosing a "good" k is to choose based on empirical data. A bad k would be one so *high* that only one or two very close data points belong to each cluster, and another bad k would be one so *low* that data points are really far away from the centers. You want to select a k such that data points in a single cluster are close together, but that there are enough clusters to effectively separate the data. You can approximate this separation by measuring how close your data points are to each cluster center, i.e., the average centroid distance between cluster points and their centroid. After trying several values for k, the centroid distance typically reaches some "elbow": it stops decreasing at a sharp rate, and this indicates a good value of k. The graph below indicates the average centroid distance for values of k between 5 and 12. A distance elbow can be seen around k=8, where the distance stops decreasing at a sharp rate. This indicates that there is enough separation to distinguish the data points in each cluster, but also that you included enough clusters so that the data points aren't *extremely* far away from each cluster. (A minimal local sketch of this elbow computation is shown after the next cell.) | # define a KMeans estimator
kmeans = sagemaker.KMeans(role = role,
train_instance_count = 1,
train_instance_type='ml.c4.xlarge',
output_path = output_path,
k = 8)
print('Training artifacts will be uploaded to: {}'.format(output_path)) | Training artifacts will be uploaded to: s3://sagemaker-eu-central-1-730357687813/counties/
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
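For reference, here is the minimal local sketch of the elbow analysis mentioned above. It is not part of the original exercise: it approximates the SageMaker centroid-distance metric with scikit-learn k-means fitted on the transformed county data.

```python
# Hypothetical local sketch: average distance from each point to its assigned
# centroid, for several candidate values of k (smaller means tighter clusters).
from sklearn.cluster import KMeans as SKKMeans

def average_centroid_distance(data, k, seed=0):
    km = SKKMeans(n_clusters=k, random_state=seed).fit(data)
    # distance of every point to the center it was assigned to
    distances = np.linalg.norm(data - km.cluster_centers_[km.labels_], axis=1)
    return distances.mean()

pca_data = counties_transformed.values
for k in range(5, 13):
    print(k, average_centroid_distance(pca_data, k))
```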
EXERCISE: Create formatted, k-means training data. Just as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model. | # convert the transformed dataframe into record_set data
kmeans_train_data_np = counties_transformed.values.astype('float32')
kmeans_formatted_train_data = kmeans.record_set(kmeans_train_data_np) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Train the k-means model. Pass in the formatted training data and train the k-means model. | %%time
kmeans.fit(kmeans_formatted_train_data) | 2020-05-23 06:55:58 Starting - Starting the training job...
2020-05-23 06:56:00 Starting - Launching requested ML instances......
2020-05-23 06:57:03 Starting - Preparing the instances for training......
2020-05-23 06:58:26 Downloading - Downloading input data
2020-05-23 06:58:26 Training - Downloading the training image...
2020-05-23 06:58:58 Uploading - Uploading generated training model
2020-05-23 06:58:58 Completed - Training job completed
[34mDocker entrypoint called with argument(s): train[0m
[34mRunning default environment configuration script[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-input.json: {u'_enable_profiler': u'false', u'_tuning_objective_metric': u'', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'_kvstore': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'true', u'epochs': u'1', u'init_method': u'random', u'local_lloyd_tol': u'0.0001', u'local_lloyd_max_iter': u'300', u'_disable_wait_to_read': u'false', u'extra_center_factor': u'auto', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'half_life_time_size': u'0', u'_num_slices': u'1'}[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'7', u'k': u'8', u'force_dense': u'True'}[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Final configuration: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'}[0m
[34m[05/23/2020 06:58:48 WARNING 140047905527616] Loggers have already been setup.[0m
[34mProcess 1 is a worker.[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Using default worker.[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Loaded iterator creator application/x-recordio-protobuf for content type ('application/x-recordio-protobuf', '1.0')[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Create Store: local[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] nvidia-smi took: 0.0252118110657 secs to identify 0 gpus[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Setting up with params: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'["msd"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'}[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] 'extra_center_factor' was set to 'auto', evaluated to 10.[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] number of center slices 1[0m
[34m[05/23/2020 06:58:48 WARNING 140047905527616] Batch size 5000 is bigger than the first batch data. Effective batch size used to initialize is 3218[0m
[34m#metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Batches Since Last Reset": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Records Since Last Reset": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Total Batches Seen": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Total Records Seen": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Max Records Seen Between Resets": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Reset Count": {"count": 1, "max": 0, "sum": 0.0, "min": 0}}, "EndTime": 1590217128.442506, "Dimensions": {"Host": "algo-1", "Meta": "init_train_data_iter", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.442472}
[0m
[34m[2020-05-23 06:58:48.442] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 0, "duration": 33, "num_examples": 1, "num_bytes": 167336}[0m
[34m[2020-05-23 06:58:48.489] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 1, "duration": 46, "num_examples": 1, "num_bytes": 167336}[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] processed a total of 3218 examples[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] #progress_metric: host=algo-1, completed 100 % of epochs[0m
[34m#metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Batches Since Last Reset": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "Number of Records Since Last Reset": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Total Batches Seen": {"count": 1, "max": 2, "sum": 2.0, "min": 2}, "Total Records Seen": {"count": 1, "max": 6436, "sum": 6436.0, "min": 6436}, "Max Records Seen Between Resets": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Reset Count": {"count": 1, "max": 1, "sum": 1.0, "min": 1}}, "EndTime": 1590217128.490535, "Dimensions": {"Host": "algo-1", "Meta": "training_data_iter", "Operation": "training", "Algorithm": "AWS/KMeansWebscale", "epoch": 0}, "StartTime": 1590217128.442763}
[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] #throughput_metric: host=algo-1, train throughput=67151.9347251 records/second[0m
[34m[05/23/2020 06:58:48 WARNING 140047905527616] wait_for_all_workers will not sync workers since the kv store is not running distributed[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] shrinking 80 centers into 8[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #0. Current mean square distance 0.062246[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #1. Current mean square distance 0.063014[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #2. Current mean square distance 0.059803[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #3. Current mean square distance 0.063063[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #4. Current mean square distance 0.064876[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #5. Current mean square distance 0.063535[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #6. Current mean square distance 0.063639[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #7. Current mean square distance 0.064357[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #8. Current mean square distance 0.061033[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] local kmeans attempt #9. Current mean square distance 0.060658[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] finished shrinking process. Mean Square Distance = 0[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] #quality_metric: host=algo-1, train msd <loss>=0.0598029382527[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: inner product took: 30.7809%, (0.017753 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] collect from kv store took: 18.8244%, (0.010857 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] splitting centers key-value pair took: 18.6784%, (0.010773 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] batch data loading with context took: 7.2903%, (0.004205 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: point norm took: 7.0377%, (0.004059 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] predict compute msd took: 6.0613%, (0.003496 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] gradient: one_hot took: 5.6905%, (0.003282 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] gradient: cluster size took: 2.3579%, (0.001360 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] gradient: cluster center took: 1.6853%, (0.000972 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] update state and report convergance took: 0.8408%, (0.000485 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] update set-up time took: 0.3795%, (0.000219 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] compute all data-center distances: center norm took: 0.3278%, (0.000189 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] predict minus dist took: 0.0451%, (0.000026 secs)[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] TOTAL took: 0.0576758384705[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Number of GPUs being used: 0[0m
[34m#metrics {"Metrics": {"finalize.time": {"count": 1, "max": 331.773042678833, "sum": 331.773042678833, "min": 331.773042678833}, "initialize.time": {"count": 1, "max": 28.280019760131836, "sum": 28.280019760131836, "min": 28.280019760131836}, "model.serialize.time": {"count": 1, "max": 0.14591217041015625, "sum": 0.14591217041015625, "min": 0.14591217041015625}, "update.time": {"count": 1, "max": 47.55997657775879, "sum": 47.55997657775879, "min": 47.55997657775879}, "epochs": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "state.serialize.time": {"count": 1, "max": 1.5878677368164062, "sum": 1.5878677368164062, "min": 1.5878677368164062}, "_shrink.time": {"count": 1, "max": 329.76484298706055, "sum": 329.76484298706055, "min": 329.76484298706055}}, "EndTime": 1590217128.824555, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.408428}
[0m
[34m[05/23/2020 06:58:48 INFO 140047905527616] Test data is not provided.[0m
[34m#metrics {"Metrics": {"totaltime": {"count": 1, "max": 479.54416275024414, "sum": 479.54416275024414, "min": 479.54416275024414}, "setuptime": {"count": 1, "max": 13.439178466796875, "sum": 13.439178466796875, "min": 13.439178466796875}}, "EndTime": 1590217128.824923, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "AWS/KMeansWebscale"}, "StartTime": 1590217128.824649}
[0m
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Deploy the k-means model. Deploy the trained model to create a `kmeans_predictor`. | %%time
# deploy the model to create a predictor
kmeans_predictor = kmeans.deploy(initial_instance_count=1,
instance_type='ml.t2.medium') | -----------------!CPU times: user 316 ms, sys: 14 ms, total: 330 ms
Wall time: 8min 32s
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Pass in the training data and assign predicted cluster labels. After deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point. | # get the predicted clusters for all the kmeans training data
cluster_info = kmeans_predictor.predict(kmeans_train_data_np) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Exploring the resultant clusters: The resulting predictions should give you information about the cluster that each data point belongs to. You should be able to answer the **question**: which cluster does a given data point belong to? | # print cluster info for one data point
data_idx = 3
print('County is: ', counties_transformed.index[data_idx])
print()
print(cluster_info[data_idx]) | County is: Alabama-Bibb
label {
key: "closest_cluster"
value {
float32_tensor {
values: 3.0
}
}
}
label {
key: "distance_to_cluster"
value {
float32_tensor {
values: 0.3843974173069
}
}
}
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
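The record structure above carries both the assigned cluster and the distance to its centroid. As an optional, hypothetical addition (not part of the original exercise), you can collect both fields for every county in one DataFrame:

```python
# Hypothetical sketch: assigned cluster and centroid distance for every county.
cluster_records = pd.DataFrame({
    'cluster': [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info],
    'distance': [c.label['distance_to_cluster'].float32_tensor.values[0] for c in cluster_info]
}, index=counties_transformed.index)
cluster_records.head()
```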
Visualize the distribution of data over clusters. Get the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster. | # get all cluster labels
cluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]
# count up the points in each cluster
cluster_df = pd.DataFrame(cluster_labels)[0].value_counts()
print(cluster_df) | 3.0 907
6.0 842
0.0 386
7.0 375
1.0 368
5.0 167
2.0 87
4.0 86
Name: 0, dtype: int64
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
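To visualize the distribution rather than just print it, a quick bar chart works (a hypothetical addition, reusing the counts computed above):

```python
# Hypothetical sketch: bar chart of county counts per cluster.
cluster_df.sort_index().plot(kind='bar', figsize=(8, 4), title='Counties per cluster')
plt.ylabel('count')
plt.show()
```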
Now, you may be wondering, what do each of these clusters tell us about these data points? To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster. Delete the Endpoint! Now that you've deployed the k-means model and extracted the cluster labels for each data point, you no longer need the k-means endpoint. | # delete kmeans endpoint
session.delete_endpoint(kmeans_predictor.endpoint) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
--- Model Attributes & Explainability: Explaining the result of the modeling is an important step in making use of our analysis. By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, based on the data. EXERCISE: Access the k-means model attributes. Extract the k-means model attributes from where they are saved as a TAR file in an S3 bucket. You'll need to access the model by the k-means training job name, and then unzip the file into `model_algo-1`. Then you can load that file using MXNet, as before. | # download and unzip the kmeans model file
# use the name model_algo-1
# download and unzip the kmeans model file
kmeans_job_name = 'kmeans-2020-05-23-06-55-58-261'
model_key = os.path.join(prefix, kmeans_job_name, 'output/model.tar.gz')
# download the model file
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1')
# get the trained kmeans params using mxnet
kmeans_model_params = mx.ndarray.load('model_algo-1')
print(kmeans_model_params) | [
[[ 0.35492653 0.23771921 0.07889839 0.2500726 0.09919675 -0.05618306
0.04399072]
[-0.23379213 -0.3808242 0.07702101 0.08526881 0.0603863 -0.00519104
0.0597847 ]
[ 1.3077838 -0.2294502 -0.17610097 -0.42974427 -0.11858643 0.11248738
0.15853602]
[-0.02278126 0.07436099 0.12951738 -0.05602401 -0.04330579 0.05682565
-0.03048567]
[ 0.5819005 -0.45450625 -0.03150757 0.04155013 -0.09733208 -0.02300905
-0.13401571]
[ 0.25074974 -0.1768499 -0.10482205 -0.22392033 0.23187745 -0.19118813
-0.10258509]
[-0.24812227 0.04720467 -0.02500745 -0.06317183 -0.03199761 -0.04560736
0.00395537]
[-0.04086831 0.03606306 -0.3563783 0.10303619 -0.01080673 0.07729725
-0.01095549]]
<NDArray 8x7 @cpu(0)>]
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
There is only 1 set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space. * **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm. | # get all the centroids
cluster_centroids=pd.DataFrame(kmeans_model_params[0].asnumpy())
cluster_centroids.columns=counties_transformed.columns
display(cluster_centroids) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Visualizing Centroids in Component Space: You can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space. This gives you insight into what characteristics define each cluster. Often with unsupervised learning, results are hard to interpret. This is one way to make use of the results of PCA + clustering techniques, together. Since you were able to examine the makeup of each PCA component, you can understand what each centroid represents in terms of the PCA components. | # generate a heatmap in component space, using the seaborn library
plt.figure(figsize = (12,9))
ax = sns.heatmap(cluster_centroids.T, cmap = 'YlGnBu')
ax.set_xlabel("Cluster")
plt.yticks(fontsize = 16)
plt.xticks(fontsize = 16)
ax.set_title("Attribute Value by Centroid")
plt.show() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
If you've forgotten what each component corresponds to at an original-feature-level, that's okay! You can use the previously defined `display_component` function to see the feature-level makeup. | # what do each of these components mean again?
# let's use the display function, from above
component_num=5
display_component(v, counties_scaled.columns.values, component_num=component_num) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Natural Groupings: You can also map the cluster labels back to each individual county and examine which counties are naturally grouped together. | # add a 'labels' column to the dataframe
counties_transformed['labels']=list(map(int, cluster_labels))
# sort by cluster label 0-6
sorted_counties = counties_transformed.sort_values('labels', ascending=True)
# view some pts in cluster 0
sorted_counties.head(20) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us that it has the highest value for the `comp_6` attribute. You can now see which counties fit that description. | # get all counties with label == 1
cluster=counties_transformed[counties_transformed['labels']==1]
cluster.head() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
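Another hypothetical way to interpret a cluster (not part of the original exercise) is to map its counties back to the scaled original features and compare their averages to the overall mean; this complements the component-level view above.

```python
# Hypothetical sketch: average scaled feature values for cluster-1 counties
# versus the overall average, to see what characterizes the group.
cluster_1_idx = counties_transformed['labels'] == 1
feature_profile = pd.DataFrame({
    'cluster_1_mean': counties_scaled[cluster_1_idx].mean(),
    'overall_mean': counties_scaled.mean()
})
feature_profile.sort_values('cluster_1_mean', ascending=False).head(10)
```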
Data Characteristics: The actual concrete compressive strength (MPa) for a given mixture at a specific age (days) was determined in the laboratory. Data is in raw form (not scaled).

Summary Statistics:
- Number of instances (observations): 1030
- Number of attributes: 9
- Attribute breakdown: 8 quantitative input variables, and 1 quantitative output variable
- Missing attribute values: none

Variable Information: given is the variable name, variable type, the measurement unit and a brief description. The concrete compressive strength is the regression target. The order of this listing corresponds to the order of numerals along the rows of the database.

Name -- Data Type -- Measurement -- Description
- Cement (component 1) -- quantitative -- kg in a m3 mixture -- Input Variable
- Blast Furnace Slag (component 2) -- quantitative -- kg in a m3 mixture -- Input Variable
- Fly Ash (component 3) -- quantitative -- kg in a m3 mixture -- Input Variable
- Water (component 4) -- quantitative -- kg in a m3 mixture -- Input Variable
- Superplasticizer (component 5) -- quantitative -- kg in a m3 mixture -- Input Variable
- Coarse Aggregate (component 6) -- quantitative -- kg in a m3 mixture -- Input Variable
- Fine Aggregate (component 7) -- quantitative -- kg in a m3 mixture -- Input Variable
- Age -- quantitative -- Day (1~365) -- Input Variable
- Concrete compressive strength -- quantitative -- MPa -- Output Variable | import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
from sklearn.linear_model import SGDRegressor,GammaRegressor,Lasso,ElasticNet,Ridge
from sklearn.linear_model import RANSACRegressor,HuberRegressor, BayesianRidge,LinearRegression
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor, AdaBoostRegressor, GradientBoostingRegressor, ExtraTreesRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor # Decision Tree Regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline # Streaming pipelines
from sklearn.model_selection import learning_curve, validation_curve, GridSearchCV # Model evaluation
from sklearn.preprocessing import StandardScaler
data=pd.read_csv('/kaggle/input/concrete-compressive-strength/Concrete Compressive Strength.csv')
data | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
EXPLORATORY DATA ANALYSIS | data.columns
data.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1030 entries, 0 to 1029
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Cement (component 1)(kg in a m^3 mixture) 1030 non-null float64
1 Blast Furnace Slag (component 2)(kg in a m^3 mixture) 1030 non-null float64
2 Fly Ash (component 3)(kg in a m^3 mixture) 1030 non-null float64
3 Water (component 4)(kg in a m^3 mixture) 1030 non-null float64
4 Superplasticizer (component 5)(kg in a m^3 mixture) 1030 non-null float64
5 Coarse Aggregate (component 6)(kg in a m^3 mixture) 1030 non-null float64
6 Fine Aggregate (component 7)(kg in a m^3 mixture) 1030 non-null float64
7 Age (day) 1030 non-null int64
8 Concrete compressive strength(MPa, megapascals) 1030 non-null float64
dtypes: float64(8), int64(1)
memory usage: 72.5 KB
| Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
All the variables are numeric. | data.describe()
data.isnull().sum() | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
No missing values are present. UNIVARIATE ANALYSIS | col=data.columns.to_list()
col
data.hist(figsize=(15,10),color='red')
plt.show()
i=1
plt.figure(figsize = (15,20))
for col in data.columns:
plt.subplot(4,3,i)
sns.boxplot(x = data[col], data = data)
i+=1 | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
Here we have found some outliers, but we did not remove them, to avoid losing data (an IQR-based outlier count is sketched after the next cell). BIVARIATE ANALYSIS | i=1
plt.figure(figsize = (18,18))
for col in data.columns:
plt.subplot(4,3,i)
sns.scatterplot(data=data,x='Concrete compressive strength(MPa, megapascals) ',y=col)
i+=1 | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
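As referenced above, here is a hypothetical sketch (not in the original notebook) for quantifying the outliers the boxplots flag, using the usual 1.5*IQR rule; we still keep all rows.

```python
# Hypothetical sketch: count IQR-based outliers per column (kept, not removed).
q1, q3 = data.quantile(0.25), data.quantile(0.75)
iqr = q3 - q1
outlier_mask = (data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)
print(outlier_mask.sum())
```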
We can see that compressive strength is most strongly correlated with cement. | plt.figure(figsize=(10,10))
sns.heatmap(data.corr(),linewidths=1,cmap='PuBuGn_r',annot=True)
correlation=data.corr()['Concrete compressive strength(MPa, megapascals) '].sort_values()
correlation.plot(kind='barh',color='green') | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
We can see that cement, superplasticizer, and age are positively correlated with compressive strength, while water and fine aggregate are negatively correlated. MODEL SELECTION | X=data.drop(columns='Concrete compressive strength(MPa, megapascals) ')
Y=data[['Concrete compressive strength(MPa, megapascals) ']]
sc=StandardScaler()
X_scaled=sc.fit_transform(X)
X_scaled=pd.DataFrame(X_scaled,columns=X.columns)
x_train,x_test,y_train,y_test=train_test_split(X_scaled,Y,test_size=.30,random_state=0)
lr=LinearRegression()
sgd=SGDRegressor()
lasso=Lasso()
ridge=Ridge()
rf=RandomForestRegressor()
dt=DecisionTreeRegressor()
gboost=GradientBoostingRegressor()
bagging=BaggingRegressor()
adboost=AdaBoostRegressor()
knn=KNeighborsRegressor()
etr=ExtraTreesRegressor()
gamma=GammaRegressor()
algo=[lr,sgd,lasso,ridge,rf,dt,gboost,bagging,adboost,knn,etr]
model=[]
accuracy_test=[]
accuracy_train=[]
for i in range(len(algo)):
algo[i].fit(x_train,y_train)
accuracy_train.append(algo[i].score(x_train,y_train))
accuracy_test.append(algo[i].score(x_test,y_test))
model.append(algo[i])
mod=pd.DataFrame([model,accuracy_train,accuracy_test]).T
mod.columns=['model','score_train','score_test']
mod | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
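Before reading off a winner, note that these scores come from a single train/test split, which can be noisy. A hypothetical 5-fold cross-validation sketch (not in the original notebook) gives a steadier estimate for any of the candidates, e.g. the extra trees regressor:

```python
# Hypothetical sketch: 5-fold cross-validated R^2 for one candidate model.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(ExtraTreesRegressor(), X_scaled, Y.values.ravel(), cv=5, scoring='r2')
print(cv_scores.mean(), cv_scores.std())
```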
We can see that the extra trees regressor has the highest test accuracy (about 90.7%), so we choose it for the final model. MODEL BUILDING | etr1=ExtraTreesRegressor()
rs=[]
score=[]
for i in range(1,200,1):
x_train,x_test,y_train,y_test=train_test_split(X_scaled,Y,test_size=.30,random_state=i)
etr1.fit(x_train,y_train)
score.append(etr1.score(x_test,y_test))
rs.append(i)
plt.figure(figsize=(20,6))
plt.plot(rs,score)
for i in range(len(score)):
print(rs[i],score[i]) | 1 0.89529318226024
2 0.9277744539369183
3 0.926825810368096
4 0.929277398220312
5 0.8946985733005189
6 0.9066382335271965
7 0.9375909152276649
8 0.8798177784082443
9 0.8792678508590264
10 0.9188761161352978
11 0.9248721043508471
12 0.9016606370091849
13 0.8790450510199522
14 0.90286206857159
15 0.9361845117635051
16 0.9103918559878086
17 0.9194389042700499
18 0.9155440974047644
19 0.9149623543026111
20 0.9152627650581631
21 0.9178825939342906
22 0.933442676595351
23 0.9038669999821688
24 0.9147860597553644
25 0.8974741270279977
26 0.9103415974014989
27 0.926171116031605
28 0.8901152376661319
29 0.9072214319234586
30 0.9069034544309591
31 0.8970305284171736
32 0.9049887830584175
33 0.9292951198961779
34 0.9173185581763424
35 0.8975881402027748
36 0.9307101720411162
37 0.9062267343439251
38 0.8926768812818899
39 0.9331845652934211
40 0.8956891147838116
41 0.9175997008124308
42 0.9004182578884321
43 0.8921783511284366
44 0.890816545901059
45 0.9033256046629572
46 0.91264162638476
47 0.9102845528486323
48 0.8926070994040652
49 0.8948750730859413
50 0.9250558398241144
51 0.8977749730713258
52 0.9141359524274064
53 0.9272097292568934
54 0.8940187101262826
55 0.9053256595779804
56 0.9102632255076534
57 0.9258405592676671
58 0.9091086234290273
59 0.9107175826425848
60 0.9083015118948643
61 0.9242459381919436
62 0.9226840828504406
63 0.8793673984988264
64 0.9064094380303714
65 0.9212710874280483
66 0.9086135993540179
67 0.8920255907491763
68 0.8997516006682192
69 0.9146011134592402
70 0.9037368695524626
71 0.9099123106690848
72 0.8968849213438918
73 0.8698487713052809
74 0.9251570458392945
75 0.911139105474144
76 0.9197288937003184
77 0.9420263760065384
78 0.8901469575408667
79 0.9174065090240028
80 0.9135348717280743
81 0.9193405053109891
82 0.9176744020331675
83 0.9157099858048742
84 0.9236440049375585
85 0.9096960662685826
86 0.8958943017704084
87 0.9141373473340262
88 0.9174506061218781
89 0.9202782740840457
90 0.9164562619726861
91 0.9278867464272998
92 0.9185593281447852
93 0.9158094189320314
94 0.91697911396183
95 0.9221607535310148
96 0.912905911582812
97 0.9154524971810701
98 0.8943985987646329
99 0.9280097640316576
100 0.9104633625466904
101 0.9203871816778284
102 0.9078549698666163
103 0.8904238060377717
104 0.9290634159998891
105 0.9131575698016983
106 0.9021645427912188
107 0.9002863065659155
108 0.9114210486507061
109 0.9235117999093678
110 0.9019974737508064
111 0.9052864492715343
112 0.9079408879989107
113 0.9390434617353796
114 0.9215598383792503
115 0.9052421284637482
116 0.9285260577433873
117 0.9059866804976253
118 0.9269265454594784
119 0.9172916857437821
120 0.8830374928260559
121 0.9170774634483768
122 0.9186296228191361
123 0.9127954527824342
124 0.8853452093122024
125 0.9058835642731625
126 0.9121821726491289
127 0.890905139533444
128 0.9158423632735686
129 0.9058979507644945
130 0.9167039256365345
131 0.9207861320443467
132 0.8867697837924595
133 0.911333405919124
134 0.9184891939657748
135 0.9128065337639947
136 0.8791450923209874
137 0.9235445611790237
138 0.9205362785073326
139 0.8989360768080421
140 0.9015958556449082
141 0.9247958900966756
142 0.9347606593729455
143 0.895182396741788
144 0.9108600968904917
145 0.9297227569104195
146 0.9326809494510843
147 0.905541363064909
148 0.9258237338234881
149 0.9337694736564791
150 0.9015384307195701
151 0.907376405740946
152 0.8998352192996377
153 0.906421221173074
154 0.9339890987006378
155 0.9023764046680294
156 0.9123423766384336
157 0.9124870458797895
158 0.9157593451133572
159 0.9103751538182557
160 0.9107960625548797
161 0.9197751762663666
162 0.9145619096371216
163 0.9203736944507968
164 0.9371642586526574
165 0.91046858685322
166 0.9250595002595737
167 0.910351726028797
168 0.9240589568889332
169 0.9252028165883652
170 0.9136243609396435
171 0.9073694274118068
172 0.9291536890562709
173 0.9207721036337553
174 0.9124238739389205
175 0.8921820304512027
176 0.9074826252809058
177 0.9014783862886651
178 0.9250600168758528
179 0.922552052206061
180 0.9349903198994561
181 0.9078509819938434
182 0.9288272802056655
183 0.9326562927853923
184 0.8887649393337306
185 0.9226222618701407
186 0.9169734617452634
187 0.9404185989813758
188 0.9219341581072451
189 0.9281335914442755
190 0.9074182739756548
191 0.8974597180226369
192 0.8938602722379191
193 0.9166000756685708
194 0.9169163105807522
195 0.9283554253936381
196 0.9101342353728978
197 0.9106007206909548
198 0.8973415852731137
199 0.9072253222734327
| Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
We can see that at random_state=77 we get a test accuracy of roughly 94%. | x_train,x_test,y_train,y_test=train_test_split(X_scaled,Y,test_size=.30,random_state=77)
etr2=ExtraTreesRegressor()
etr2.fit(x_train,y_train)
etr2.score(x_train,y_train)
etr2.score(x_test,y_test)
y_test_pred=etr2.predict(x_test)
y_test1=y_test.copy()
y_test1['pred']=y_test_pred
y_test1.corr() | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
We can see here that the correlation between the actual and predicted values is about 97.17%. | from sklearn.metrics import mean_squared_error,r2_score
mean_squared_error(y_test1[ 'Concrete compressive strength(MPa, megapascals) '],y_test1['pred'])
rsme=np.sqrt(mean_squared_error(y_test1[ 'Concrete compressive strength(MPa, megapascals) '],y_test1['pred']))
rsme | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
We can see that the root mean squared error is only about 4.15 MPa, which shows that our model performs well. | r2_score(y_test1[ 'Concrete compressive strength(MPa, megapascals) '],y_test1['pred'])
plt.barh(X.columns,etr2.feature_importances_) | _____no_output_____ | Apache-2.0 | concrete-data-eda-model-acc-97.ipynb | NaveenKumarMaurya/my-datascience-end-to-end-project-portfolio |
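As a final visual check (a hypothetical addition, not in the original notebook), a predicted-versus-actual scatter plot makes the fit quality easy to eyeball:

```python
# Hypothetical sketch: predicted vs. actual compressive strength on the test set.
actual = y_test1['Concrete compressive strength(MPa, megapascals) ']
plt.figure(figsize=(6, 6))
plt.scatter(actual, y_test1['pred'], alpha=0.5)
plt.plot([actual.min(), actual.max()], [actual.min(), actual.max()], 'r--')  # ideal fit line
plt.xlabel('Actual strength (MPa)')
plt.ylabel('Predicted strength (MPa)')
plt.show()
```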
Load data | adni.load(show_output=False) | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Display MetaData | meta_df = adni.meta_to_df()
sprint.pd_cols(meta_df) | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Display ImageFiles | files_df = adni.files_to_df()
sprint.pd_cols(files_df)
adni_df = adni.to_df()
sprint.pd_cols(adni_df) | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Analysis Overview | fig, axes = splot.meta_settings(rows=3)
splot.histplot(
adni_df,
x='subject.researchGroup',
hue='subject.subjectSex',
ax=axes[0,0],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'ResearchGroup distribution','xlabel':'Disorder'}
)
splot.histplot(
adni_df,
x='subject.subjectIdentifier',
ax=axes[0,1],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'SubjectIdentifier distribution','xlabel':'subjectIdentifier','rotation':90}
)
splot.histplot(
adni_df,
x='subject.subjectSex',
ax=axes[1,0],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'SubjectSex distribution','xlabel':'subjectSex'}
)
splot.histplot(
adni_df,
x='subject.study.subjectAge',
hue='subject.subjectSex',
discrete=False,
ax=axes[1,1],
plot_kws={'element':'poly','fill':False},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'SubjectAge distribution'}
)
splot.histplot(
adni_df,
x='subject.study.series.dateAcquired',
hue='subject.researchGroup',
discrete=False,
ax=axes[2,0],
plot_kws={},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'SubjectAge distribution'}
)
splot.histplot(
adni_df,
x='subject.study.weightKg',
hue='subject.subjectSex',
discrete=False,
ax=axes[2,1],
plot_kws={'element':'poly','fill':False},
legend_kws={'title':'subjectSex'},
setting_kws={'title':'weightKg distribution'}
)
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Data sizes | fig, axes = splot.meta_settings(rows=2,figsize=(15,10))
splot.histplot(
adni_df,
discrete=False,
x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Slices',
hue='subject.researchGroup',
multiple='stack',
ax=axes[0,0],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Number of Slices','xlabel':'Slices','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=False,
x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Columns',
hue='subject.researchGroup',
multiple='stack',
ax=axes[0,1],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Number of Columns','xlabel':'Slices','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=False,
x='subject.study.imagingProtocol.protocolTerm.protocol.Number_of_Rows',
hue='subject.researchGroup',
multiple='stack',
ax=axes[1,0],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Number of Rows','xlabel':'Slices','ylabel':'Frequency'}
)
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Scoring | fig, axes = splot.meta_settings(rows=3)
splot.histplot(
adni_df,
discrete=True,
x='subject.visit.assessment.component.assessmentScore.FAQTOTAL',
hue='subject.researchGroup',
multiple='stack',
ax=axes[0,0],
plot_kws={'stat':'frequency'},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Functional Activities Questionnaires (FAQTOTAL)','xlabel':'Score','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=True,
x='subject.visit.assessment.component.assessmentScore.NPISCORE',
hue='subject.researchGroup',
multiple='stack',
ax=axes[0,1],
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'assessmentScore_NPISCORE','xlabel':'Score','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=True,
x='subject.visit.assessment.component.assessmentScore.CDGLOBAL',
hue='subject.researchGroup',
multiple='stack',
ax=axes[1,0],
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Clinical Dementia Rating Scale (CDGLOBAL)','xlabel':'Score','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=True,
x='subject.visit.assessment.component.assessmentScore.GDTOTAL',
hue='subject.researchGroup',
multiple='stack',
ax=axes[1,1],
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'assessmentScore.GDTOTAL','xlabel':'Score','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
discrete=True,
x='subject.visit.assessment.component.assessmentScore.MMSCORE',
hue='subject.researchGroup',
multiple='stack',
ax=axes[2,0],
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'Mini-Mental State Examination (MMSCORE)','xlabel':'Score','ylabel':'Frequency'}
)
splot.histplot(
adni_df,
x='subject.visit.assessment.component.assessmentScore.MMSCORE',
hue='subject.researchGroup',
discrete=False,
ax=axes[2,1],
plot_kws={'element':'poly','fill':False},
legend_kws={'title':'ResearchGroup'},
setting_kws={'title':'MMSE Score per Condition'}
)
plt.show() | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Visualise brain slices Create Image generator | SKIP_LAYERS = 10
LIMIT_LAYERS = 70
image_AD_generator = adni.load_images(
files=adni.load_files(adni.path.category+'AD/', adni.filename_category, use_processed=True)
)
image_CN_generator = adni.load_images(
files=adni.load_files(adni.path.category+'CN/', adni.filename_category, use_processed=True)
)
image_MCI_generator = adni.load_images(
files=adni.load_files(adni.path.category+'MCI/', adni.filename_category, use_processed=True)
)
### Testing functions
from nilearn.plotting import view_img, plot_glass_brain, plot_anat, plot_epi
test_image = next(image_CN_generator)
test_image.shape
while True:
test_image = next(image_AD_generator)
plot_anat(test_image, draw_cross=False, display_mode='z',cut_coords=20,annotate=False)
plt.show()
break
images_AD_array = adni.to_array(list(image_AD_generator))
images_CN_array = adni.to_array(list(image_CN_generator))
images_MCI_array = adni.to_array(list(image_MCI_generator))
images_AD = next(images_AD_array)
images_CN = next(images_CN_array)
images_MCI = next(images_MCI_array) | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Coronal plane (From top) | image_AD_slices = [images_AD[layer,:,:] for layer in range(0,images_AD.shape[0],SKIP_LAYERS)]
dplay.display_advanced_plot(image_AD_slices)
plt.suptitle("Coronal plane - AD")
image_CN_slices = [images_CN[layer,:,:] for layer in range(0,images_CN.shape[0],SKIP_LAYERS)]
dplay.display_advanced_plot(image_CN_slices)
plt.suptitle("Coronal plane - CN")
image_MCI_slices = [images_MCI[layer,:,:] for layer in range(0,images_MCI.shape[0],SKIP_LAYERS)]
dplay.display_advanced_plot(image_MCI_slices)
plt.suptitle("Coronal plane - MCI") | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Sagittal plane (From front) | image_slices = [images_AD[:,layer,:] for layer in range(0,images_AD.shape[1], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Sagittal plane") | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
Horizontal plane (from side) | image_slices = [images_AD[:,:,layer] for layer in range(0,images_AD.shape[2], SKIP_LAYERS)]
dplay.display_advanced_plot(image_slices)
plt.suptitle("Horisonal plane") | _____no_output_____ | Apache-2.0 | notebooks/00 - visualisation.ipynb | FredrikM97/Medical-ROI |
TensorFlow Tutorial: Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables - Start your own session - Train algorithms - Implement a Neural Network. Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow Library: To start, you will import the library: | import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1) | _____no_output_____ | MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ | y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss | 9
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below: | a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c) | Tensor("Mul:0", shape=(), dtype=int32)
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. | sess = tf.Session()
print(sess.run(c)) | 20
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. | # Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close() | 6
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear function: Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication - tf.add(..., ...) to do an addition - np.random.randn(...) to initialize randomly | # GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3, 1), name="X")
W = tf.constant(np.random.randn(4, 3), name="W")
b = tf.constant(np.random.randn(4, 1), name="b")
Y = tf.matmul(W, X) + b
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function())) | result = [[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
*** Expected Output ***: **result** [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] 1.2 - Computing the sigmoid: Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")` - `tf.sigmoid(...)` - `sess.run(..., feed_dict = {x: z})` Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
**Method 2:**
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
``` | # GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(dtype=tf.float32, name="x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict={x: z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12))) | sigmoid(0) = 0.5
sigmoid(12) = 0.999994
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
*** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. (A short end-to-end sketch combining these four steps appears right after the next exercise.) 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$ | # GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(dtype=tf.float32, name="logits")
y = tf.placeholder(dtype=tf.float32, name="labels")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z: logits, y: labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost)) | cost = [ 1.00538719 1.03664088 0.41385433 0.39956614]
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
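To tie together the four steps summarized above (create a placeholder, specify the computation graph, create a session, run it with a feed dictionary), here is a minimal, self-contained sketch. It is not part of the graded exercises and assumes only the TensorFlow 1.x import already used in this notebook.

```python
import tensorflow as tf

# 1. Create a placeholder
x = tf.placeholder(tf.float32, name="x")
# 2. Specify the computation graph
y = 2.0 * x + 1.0
# 3. Create the session
with tf.Session() as sess:
    # 4. Run the session, feeding a value for the placeholder
    print(sess.run(y, feed_dict={x: 3.0}))  # prints 7.0
```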
** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. | # GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, dtype=tf.int32, name="C")
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot)) | one_hot = [[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). - tf.ones(shape) | # GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3]))) | ones = [ 1. 1. 1.]
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output:** **ones** [ 1. 1. 1.] 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graph Let's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, along with an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset. | # Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() | _____no_output_____ | MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
Change the index below and run the cell to visualize some examples in the dataset. | # Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) | y = 5
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. | # Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape)) | number of training examples = 1080
number of test examples = 120
X_train shape: (12288, 1080)
Y_train shape: (6, 1080)
X_test shape: (12288, 120)
Y_test shape: (6, 120)
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. | # GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible about the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(dtype=tf.float32, shape=[n_x, None], name="X")
Y = tf.placeholder(dtype=tf.float32, shape=[n_y, None], name= "Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y)) | X = Tensor("X_3:0", shape=(12288, ?), dtype=float32)
Y = Tensor("Y_2:0", shape=(6, ?), dtype=float32)
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours. | # GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"])) | W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! | # GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3)                                              # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3)) | Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute cost As seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` basically does the summation over the examples. | # GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost)) | cost = Tensor("Mean:0", shape=(), dtype=float32)
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model.After you compute the cost function. You will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. | def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training set labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters | _____no_output_____ | MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! | parameters = model(X_train, Y_train, X_test, Y_test) | Cost after epoch 0: 1.855702
Cost after epoch 100: 1.016458
Cost after epoch 200: 0.733102
Cost after epoch 300: 0.572940
Cost after epoch 400: 0.468774
Cost after epoch 500: 0.381021
Cost after epoch 600: 0.313822
Cost after epoch 700: 0.254158
Cost after epoch 800: 0.203829
Cost after epoch 900: 0.166421
Cost after epoch 1000: 0.141486
Cost after epoch 1100: 0.107580
Cost after epoch 1200: 0.086270
Cost after epoch 1300: 0.059371
Cost after epoch 1400: 0.052228
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! | import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) | Your algorithm predicts: y = 3
| MIT | Week3/Tensorflow_Tutorial.ipynb | dhingratul/Practical_Aspect_of_Deep_Learning |
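The insights above suggest trying L2 or dropout regularization to close the gap between train and test accuracy. The snippet below is only a rough sketch of how an L2 penalty might be added to the softmax cost used in this notebook; it reuses `Z3`, `Y`, and the `parameters` dictionary defined earlier, and `lambd` is a hypothetical hyperparameter whose value would need tuning.

```python
# Hypothetical L2-regularized cost (a sketch, not part of the graded assignment)
lambd = 0.01  # assumed regularization strength
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=tf.transpose(Z3),
                                            labels=tf.transpose(Y)))
l2_penalty = lambd * (tf.nn.l2_loss(parameters["W1"])
                      + tf.nn.l2_loss(parameters["W2"])
                      + tf.nn.l2_loss(parameters["W3"]))
regularized_cost = cross_entropy + l2_penalty  # minimize this instead of the plain cost
```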
Python I Python is an interpreted high-level general-purpose programming language. Its design philosophy emphasizes code readability with its use of significant indentation. Its language constructs as well as its object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.* [Cheatsheet](https://perso.limsi.fr/pointal/_media/python:cours:mementopython3-english.pdf) The Python Interpreter Execution of Python programs is often performed by an **interpreter**, meaning that program statements are converted to machine executable code at **runtime** (i.e., when the program is actually run) as opposed to **compiled** into executable code before it is run by the end user. This is one of the primary ways we'll interact with Python, especially at first. We'll type some Python code and then hit the `Enter` key. This causes the code to be translated and executed. Interpretation allows great flexibility (interpreted programs can modify their source code at run time), but it's often the case that interpreted programs run much more slowly than their compiled counterparts. It's also often more difficult to find errors in interpreted programs. We can interact with the Python interpreter via a prompt, which looks something like the following: (base) C:\Users\nimda>python Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> print("Hello World") # note: this will print something to the screen. Hello World >>> Above, the `print` function prints out a string representation of the argument. The `#` denotes a comment, and the interpreter skips anything on the line after it (that is, it won't try to interpret anything after the `#`). Of course, we can save Python code into a program file and execute it later, too. Jupyter Notebooks We'll also interact with Python using Jupyter Notebooks (like this one). When we hit `Run` in the menu bar, we are performing an action analogous to hitting enter from a command prompt. The code in the active cell will be executed. Users should be aware that though we are interacting with a Web page, there is a Web server and Python environment running behind the scenes. This adds a layer of complexity, but the ability to mix well-formatted documentation and code makes using Jupyter Notebook worthwhile. Python Identifiers and Variables Identifiers An **identifier** in Python is a word (a string) used to identify a variable, function, class, etc. in a Python program. It can be thought of as a proper name. Identifiers start with a letter (A-Z) or an underscore `_`; this first character is followed by a sequence of letters, numbers, and underscores. Certain identifiers, such as `class` or `if`, are built-in keywords and cannot be redefined by users. (A short check of this appears after the next code example.) Variables As in most programming languages, **variables** play a central role in Python. We need a way to store and refer to data in our programs, and variables are the primary way to do this. Specifically, we assign data values to variables using `=`. After the assignment has been made, we may use the variable to access the data as many times as we like. In general, the righthand side of an assignment is evaluated first (e.g., 1+1 is evaluated to 2), and afterwards the result is stored in the variable specified on the left. That explains why the last line below results in a value of 6 being printed.
On evaluation of the righthand side, the current value of `blue_fish` (3) is added to itself, and the resulting value is assigned to `blue_fish`, overwriting the 3. | one_fish = 1
two_fish = one_fish + 1
blue_fish = one_fish + two_fish
print(one_fish)
print(two_fish)
print(blue_fish)
blue_fish = blue_fish + blue_fish
print(blue_fish) | 1
2
3
6
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
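As an optional check of the identifier rules described above, the standard-library `keyword` module reports whether a name is a reserved keyword; nothing in this sketch is needed by the rest of the notebook.

```python
import keyword

my_variable_1 = 10      # a valid identifier: letters, digits, and underscores
_hidden_value = "ok"    # identifiers may also start with an underscore
print(keyword.iskeyword("class"))          # True: 'class' cannot be used as a variable name
print(keyword.iskeyword("my_variable_1"))  # False: free to use as an identifier
```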
Dynamic Typing Note that no data type (e.g., integer, string) is specified in an assignment, even the first time a variable is used. In general, variables and types are *not* declared in Python before a value is assigned. Python is said to be a **dynamically typed** language. The below code is perfectly fine in Python, but assigning a number and then a string in another language such as Java would cause an error. | a = 1
print(a)
a = "hello"
print(a) | 1
hello
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Data TypesAs in most programming languages, each data value in a Python program has a **data type** (even though we typically don't specify it). We'll discuss some of the datatypes here. For a given data value, we can get its type using the `type` function, which takes an argument. The below print expressions show several of the built-in data types (and how literal values are parsed by default). | print(type(1)) # an integer
print(type(2.0)) # a float
print(type("hi!")) # a string
print(type(True)) # a boolean value
print(type([1, 2, 3, 4, 5])) # a list (a mutable collection)
print(type((1, 2, 3, 4, 5))) # a tuple (an immutable collection)
print(type({"fname": "john", "lname": "doe"})) # a dictionary (a collection of key-value pairs) | <class 'int'>
<class 'float'>
<class 'str'>
<class 'bool'>
<class 'list'>
<class 'tuple'>
<class 'dict'>
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
NumbersThe basic numerical data types of python are:* `int` (integer values), * `float` (floating point numbers), and * `complex` (complex numbers). | x = 1 #
y = 1.0
z = 1 + 2j
w = 1e10
v = 1.0
u = 2j
print(type(x), ": ", x)
print(type(y), ": ", y)
print(type(z), ": ", z)
print(type(w), ": ", w)
print(type(u), ": ", v)
print(type(u), ": ", u) | <class 'int'> : 1
<class 'float'> : 1.0
<class 'complex'> : (1+2j)
<class 'float'> : 10000000000.0
<class 'complex'> : 1.0
<class 'complex'> : 2j
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
In general, a number written simply as an integer will, unsurprisingly, be interpreted in Python as an `int`. Numbers written using a `.` or scientific notation are interpreted as floats. Numbers written using `j` are interpreted as complex numbers.**NOTE**: Unlike some other languages, Python 3 does not have minimum or maximum integer values (Python 2 does, however). Arithmetic The arithmetic operations available in most languages are also present in Python (with a default precedence on operations). | 1 + 3 - (3 - 2) # simple addition and subtraction
4 * 2.0 # multiplication of an int and a float (yields a float)
5 / 2 # floating point division
print(5.6 // 2) # integer division
print(type(5.6 // 2))
5 % 2 # modulo operator (straightforwardly, the integer remainder of 5/2)
2 % -5 # (not so intuitive if negative numbers are involved)
2**4 # exponentiation | _____no_output_____ | MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
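Before moving on, here is a small optional check, using only the built-in `type()`, of the data types produced by mixed-type arithmetic; it anticipates the note on result types just below.

```python
print(type(1 + 3))     # <class 'int'>   : both operands are ints
print(type(4 * 2.0))   # <class 'float'> : mixing an int and a float yields a float
print(type(5 / 2))     # <class 'float'> : true division of two ints yields a float
print(type(5.6 // 2))  # <class 'float'> : floor division yields a float if either operand is one
```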
Data Type of resultsWhen two numbers of different types are used in an arithmetic operation, the data type is usually what one would expect, but there are some cases where it's different than either operand. For instance, though 5 and 2 are both integers, the result of `5/2` is a `float`, and the result of `5.2//2` (integer division) is a float. StringsStrings in Python (datatype `str`) can be enclosed in single (`'`) or double (`"`) quotes. It doesn't matter which is used, but the opening and closing marks must be of the same type. The backslash `\` is used to escape quotes in a string as well as to indicate other escape characters (e.g., `\n` indicates a new line). Upon printing, the string is formatted appropriately. | print("This is a string")
print('this is a string containing "quotes"')
print('this is another string containing "quotes"')
print("this is string\nhas two lines") | This is a string
this is a string containing "quotes"
this is another string containing "quotes"
this is string
has two lines
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
To prevent processing of escape characters, you can use indicate a *raw* string by putting an `r` before the string. | print(r"this is string \n has only one line") | this is string \n has only one line
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Multiline StringsMultiline strings can be delineated using 3 quotes. If you do not wish to include a line end in the output, you can end the line with `\`. | print(
"""Line 1
Line 2
Line 3\
Line 3 continued"""
) | Line 1
Line 2
Line 3Line 3 continued
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
String Concatenation Strings can be concatenated. You must be careful when trying to concatenate other types to a string, however. They must be converted to strings first using `str()`. | print("This" + " line contains " + str(4) + " components")
print(
"Here are some things converted to strings: "
+ str(2.3)
+ ", "
+ str(True)
+ ", "
+ str((1, 2))
) | This line contains 4 components
Here are some things converted to strings: 2.3, True, (1, 2)
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
`print` can take an arbitrary number of arguments. Leveraging this eliminates the need to explicitly convert data values to strings (because we're no longer attempting to concatenate strings). | print("This", "line contains", 4, "components")
print("Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2)) | This line contains 4 components
Here are some things converted to strings: 2.3 , True , (1, 2)
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Note, however, that `print` will by default insert a space between elements. If you wish to change the separator between items (e.g. to `,`) , add `sep=","` as an argument. | print("This", "line contains", 4, "components", sep="---")
print(
"Here are some things converted to strings:", 2.3, ",", True, ",", (1, 2), sep="---"
) | This---line contains---4---components
Here are some things converted to strings:---2.3---,---True---,---(1, 2)
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
You can also create a string from another string by *multiplying* it with a number | word1 = "abba"
word2 = 3 * word1
print(word2) | abbaabbaabba
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Also, if multiple **string literals** (as opposed to variables or string expressions) appear consecutively, they will be combined into one string. | a = "this " "is " "the " "way " "the " "world " "ends."
print(a)
print(type(a))
a = "this ", "is ", "the ", "way ", "the ", "world ", "ends."
print(a)
print(type(a)) | this is the way the world ends.
<class 'str'>
('this ', 'is ', 'the ', 'way ', 'the ', 'world ', 'ends.')
<class 'tuple'>
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Substrings: Indexing and SlicingA character of a string can be extracted using an index (starting at 0), and a substring can be extracted using **slices**. Slices indicate a range of indexes. The notation is similar to that used for arrays in other languages.It also happens that indexing from the right (staring at -1) is possible. | string1 = "this is the way the world ends."
print(string1[12]) # the substring at index 12 (1 character)
print(string1[0:4]) # from the start of the string to index 4 (but 4 is excluded)
print(string1[5:]) # from index 5 to the end of the string
print(string1[:4]) # from the start of the string to index 4 (4 is excluded)
print(string1[-1]) # The last character of the string
print(string1[-5:-1]) # from index -5 to -1 (but excluding -1)
print(string1[-5:]) # from index -5 to the end of the string | w
this
is the way the world ends.
this
.
ends
ends.
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
**NOTE**: Strings are **immutable**. We cannot reassign a character or sequence in a string as we might assign values to an array in some other programming languages. When the below code is executed, an exception (error) will be raised. | a = "abc"
a[0] = "b" # this will raise an exception | _____no_output_____ | MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
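Because strings are immutable, the usual workaround is to build a *new* string and rebind the variable name; here is a minimal sketch using only built-in operations.

```python
a = "abc"
a = "b" + a[1:]               # build a new string with a different first character
print(a)                      # bbc
b = "abc".replace("a", "b")   # replace() also returns a new string; the original is unchanged
print(b)                      # bbc
```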
Splitting and Joining Strings It's often the case that we want to split strings into multiple substrings, e.g., when reading a comma-delimited list of values. The `split` method of a string does just that. It returns a list object (lists are covered later). To combine strings using a delimiter (e.g., to create a comma-delimited list), we can use `join`. | text = "The quick brown fox jumped over the lazy dog"
spl = text.split() # This returns a list of strings (lists are covered later)
print(spl)
joined = ",".join(spl)
print(joined) # and this re-joins them, separating words with commas
spl = joined.split(",") # and this re-splits them, again based on commas
print(spl)
joined = "-".join(spl) # and this re-joins them, separating words with dashes
print(joined) | ['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
The,quick,brown,fox,jumped,over,the,lazy,dog
['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
The-quick-brown-fox-jumped-over-the-lazy-dog
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Similarly, to split a multiline string into a list of lines (each a string), we can use `splitlines`. | lines = """one
two
three"""
li = lines.splitlines() # Split the multiple line string
print(li) | ['one', 'two', 'three']
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
To join strings into multiple lines, we can again use `join`. | lines = ["one", "two", "three"]
data = "\n".join(lines)# join list of strings to multiple line string
print(data) | one
two
three
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Boolean Values, and NonePython has two Boolean values, `True` and `False`. The normal logical operations (`and`, `or`, `not`) are present. | print(True and False)
print(True or False)
print(not True) | False
True
False
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
There is also the value `None` (the only value of the `NoneType` data type). `None` is used to stand for the absence of a value. However, it can be used in place of False, as can zero numerical values (of any numerical type) and empty sequences/collections (`[]`, `()`, `{}`, etc.). Other values are treated as `True`. Note that Boolean expressions are short-circuited: as soon as the interpreter knows enough to compute the appropriate Boolean value of the expression, it stops further evaluation. Also, the return value of a Boolean expression need not be a Boolean value, as indicated below. The value of the last item evaluated is returned. | print(1 and True)
print(True and 66)
print(True and "aa")
print(False and "aa")
print(True or {})
print(not [])
print(True and ()) | True
66
aa
False
True
True
()
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Boolean Comparisons There are 8 basic comparison operations in Python.| Symbol | Note | | --- | --- || `<` | less than | | `<=` | less than or equal to | | `>` | greater than | | `>=` | greater than or equal to | | `==` | equal to | | `!=` | not equal to | | `is` | identical to (for objects) | | `is not` | not identical to (for objects) | Regarding the first 6, these will work as expected for numerical values. Note, however, that they can be applied to other datatypes as well. Strings are compared on a character-by-character basis, based on a lexicographic ordering. Sequences such as lists are compared on an element by element basis. | print("abc" > "ac")
print("a" < "1")
print("A" < "a")
print((1, 1, 2) < (1, 1, 3)) | False
False
True
True
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Note that `is` is true only if the two items compared are the *same* object, whereas `==` only checks for equality in a weaker sense. Below, the two tuples `x` and `y` have elements that evaluate as being equal, but the two objects are nevertheless distinct in memory. As such, the first `print` statement should yield `True`, while the second should yield `False`. | x = (1, 1, 2)
y = (1, 1, 2)
print(x == y)
print(x is y)
x = "hello"
y = x
a = "hel"
b = "lo"
z = a + b
w = x[:]
print(x)
print(y)
print(z)
print("x==y: ", x == y)
print("x==z: ", x == z)
print("x is y: ", x is y)
print("x is z: ", x is z)
print("x is w: ", x is w) | hello
hello
hello
x==y: True
x==z: True
x is y: True
x is z: False
x is w: True
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Converting between TypesValues of certain data types can be converted to values of other datatypes (actually, a new value of the desired data type is produced). If the conversion cannot take place (becuase the datatypes are incompatible), an exception will be raised. | x = 1
s = str(x) # convert x to a string
s_int = int(s)
s_float = float(s)
s_comp = complex(s)
x_float = float(x)
print(s)
print(s_int) # convert to an integer
print(s_float) # convert to a floating point number
print(s_comp) # convert to a complext number
print(x_float)
# Let's check their IDs
print(id(x))
print(id(s))
print(id(s_int))
print(id(s_float))
print(id(x_float))
print(id(int(x_float))) | 1
1
1.0
(1+0j)
1.0
93926537898496
140538951529264
93926537898496
140538952028656
140538952028464
93926537898496
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
The `id()` functionThe `id()` function can be used to identify an object in memory. It returns an integer value that is guaranteed to uniquely identify an object for the duration of its existence. | print("id(x): ", id(x))
print("id(y): ", id(y))
print("id(z): ", id(z))
print("id(w): ", id(w)) | id(x): 140539018862384
id(y): 140539018862384
id(z): 140538951488880
id(w): 140539018862384
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Lists,Tuples, Sets, and Dictionaries Lists Many languages (e.g., Java) have what are often called **arrays**. In Python the object most like them are called **lists**. Like arrays in other languages, Python lists are represented syntactically using `[...]` blocks. Their elements can be referenced via indexes, and just like arrays in other languages, Python lists are **mutable** objects. That is, it is possible to change the value of an individual cell in a list. In this way, Python lists are unlike Python strings (which are immutable). | a = [0, 1, 2, 3] # a list of integers
print(a)
a[0] = 3 # overwrite the first element of the list
print(a)
a[1:3] = [4, 5]
# overwrite the last two elements of the list (using values from a new list)
print(a) | [0, 1, 2, 3]
[3, 1, 2, 3]
[3, 4, 5, 3]
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Note that some operations on lists return other lists. | a = [1, 2, 3]
b = [4, 5, 6]
c = a + b
print(a)
print(b)
print(c)
print("-" * 25)
c[0] = 10
b[0] = 40
print(a)
print(b)
print(c) | [1, 2, 3]
[4, 5, 6]
[1, 2, 3, 4, 5, 6]
-------------------------
[1, 2, 3]
[40, 5, 6]
[10, 2, 3, 4, 5, 6]
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Above, `c` is a new list containing elements copied from `a` and `b`. Subsequent changes to `a` or `b` do not affect `c`, and changes to `c` do not affect `a` or `b`. The length of a list can be obtained using `len()`, and a single element can be added to a list using `append()`. Note the syntax used for each. | a = []
a.append(1) # add an element to the end of the list
a.append(2)
a.append([3, 4])
print(a)
print("length of 'a': ", len(a)) | [1, 2, [3, 4]]
length of 'a': 3
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Some additional list operations are shown below. Pay careful attention to how `a` and `b` are related. | a = [10]
a.extend([11, 12]) # append elements of one list to the end of another one
b = a
c = a.copy() # copy the elements of a to a new list, and then assign it to c
b[0] = 20
c[0] = 30
print("a:", a)
print("b:", b)
print("c:", c)
b.reverse() # reverse the elements of the list in place
print("a reversed:", a)
b.sort()
print("a sorted:", a)
a.clear() # empty the list
print("b is ", b, " having length ", len(b))
list1 = ["a", "b", "d", "e"]
list1.insert(2, "c") # insert element "c" at position 2, increasing the length by 1
print(list1)
e = list1.pop() # remove the last element of the list
print("popped: ", e, list1)
list1 = ["d", "b", "b", "c", "d", "d", "a"]
list1.sort() # sort the list
print("new list, sorted:", list1)
print("count of 'd': ", list1.count("d")) # count the number of times "d" occurs
print("first index of 'd': ", list1.index("d")) # return the index of the first occurrence of "d"
print(list1)
del list1[2] # remove the element at index 2
print("element at index 2 removed:", list1)
del list1[2:4] # remove the elements from index 2 to 4
print("elements at index 2-4 removed:", list1) | ['a', 'b', 'c', 'd', 'e']
popped: e ['a', 'b', 'c', 'd']
new list, sorted: ['a', 'b', 'b', 'c', 'd', 'd', 'd']
count of 'd': 3
first index of 'd': 4
['a', 'b', 'b', 'c', 'd', 'd', 'd']
ele at index 2 removed: ['a', 'b', 'c', 'd', 'd', 'd']
elements at index 2-4 removed: ['a', 'b', 'd', 'd']
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
TuplesThere also exists an immutable counterpart to a list, the **tuple**. Elements can also be referenced by index, but (as with Python strings) new values cannot be assigned. Unlike a list, Tuples are created using either `(...)` or simply by using a comma-delimeted sequence of 1 or more elements. | a = () # the empty tuple
b = (1, 2) # a tuple of 2 elements
c = 3, 4, 5 # another way of creating a tuple
d = (6,) # a singleton tuple
e = (7,) # another singleton tuple
print(a)
print(b)
print(c)
print(d)
print(len(d))
print(e)
print(b[1]) | ()
(1, 2)
(3, 4, 5)
(6,)
1
(7,)
2
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
As with lists, we can combine tuples to form new tuples | a = (1, 2, 3, 4) # Create python tuple
b = "x", "y", "z" # Another way to create python tuple
c = a[0:3] + b # Concatenate two python tuples
print(c) | (1, 2, 3, 'x', 'y', 'z')
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
SetsSets, created using `{...}` or `set(...)` in Python, are unordered collections without duplicate elements. If the same element is added again, the set will not change. | a = {"a", "b", "c", "d"} # create a new set containing these elements
b = set(
"hello world"
) # create a set containing the distinct characters of 'hello world'
print(a)
print(b)
print(a | b) # print the union of a and b
print(a & b) # print the intersection of a and b
print(a - b) # print elements of a not in b
print(b - a) # print elements of b not in a
print(b ^ a) # print elements in either but not both | {'a', 'd', 'c', 'b'}
{'l', 'r', 'w', 'e', 'd', 'h', ' ', 'o'}
{'l', 'b', 'c', 'r', 'w', 'e', 'd', 'h', ' ', 'a', 'o'}
{'d'}
{'a', 'c', 'b'}
{'l', 'r', 'w', 'e', 'h', ' ', 'o'}
{'l', 'b', 'c', 'r', 'w', 'e', 'h', ' ', 'a', 'o'}
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Given the below, it appears that `==` is used to evaluate membership. | a = "hello"
b = "hel"
c = "lo"
d = b + c # Concatenate string
s = {a, b, c, d}
print("id(a):", a)
print("id(d):", d)
print(s) | id(a): hello
id(d): hello
{'lo', 'hel', 'hello'}
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Dictionaries Dictionaries are collections of key-value pairs. A dictionary can be created using `d = {key1:value1, key2:value2, ...}` syntax, or else from 2-ary tuples using `dict()`. New key-value pairs can be assigned, and old values referenced, using `d[key]`. | employee = {"last": "smth", "first": "joe"} # Create dictionary
employee["middle"] = "william" # Add new key and value to the dictionary
employee["last"] = "smith"
addr = {} # an empty dictionary
addr["number"] = 1234
addr["street"] = "Elm St" # Add new key and value to the dictionary
addr["city"] = "Athens" # Add new key and value to the dictionary
addr["state"] = "GA" # Add new key and value to the dictionary
addr["zip"] = "30602" # Add new key and value to the dictionary
employee["address"] = addr
print(employee)
keys = list(employee.keys()) # list the keys of 'employee'
print("keys: " + str(sorted(keys)))
print("last" in keys) # Print whether 'last' is in keys or not (prints True or False)
print("lastt" in keys) # Print whether 'lastt' is in keys or not (prints True or False)
employee2 = employee.copy() # create a shallow copy of the employee
employee2["last"] = "jones"
employee2["address"][
"street"
] = "beech" # reassign the street name of the employee's address
print(employee)
print(employee2) | {'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'Elm St', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}}
keys: ['address', 'first', 'last', 'middle']
True
False
{'last': 'smith', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'beech', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}}
{'last': 'jones', 'first': 'joe', 'middle': 'william', 'address': {'number': 1234, 'street': 'beech', 'city': 'Athens', 'state': 'GA', 'zip': '30602'}}
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
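In the example above, `copy()` produces a shallow copy, which is why changing the nested address through `employee2` also shows up in `employee`. When fully independent copies are needed, the standard-library `copy.deepcopy` can be used; a small optional sketch:

```python
import copy

original = {"last": "smith", "address": {"city": "Athens"}}
shallow = original.copy()          # the nested dict is shared with the original
deep = copy.deepcopy(original)     # the nested dict is fully duplicated
original["address"]["city"] = "Atlanta"
print(shallow["address"]["city"])  # Atlanta (shared nested dict)
print(deep["address"]["city"])     # Athens  (independent copy)
```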
Conversion Between Types | y = (1, 2, 3, 1, 1) # Create tuple
z = list(y) # convert tuple to a list
print(y)
print(z)
print(tuple(z)) # convert z to a tuple
print(set(z)) # convert z to a set
w = (("one", 1), ("two", 2), ("three", 3)) # Create special tuple to convert it to dictionary
v = dict(w) # Convert the tuple to dictionary
print(v)
print(tuple(v)) # Convert the dictionary to tuple
print(tuple(v.keys())) # Get the keys of the dictionary
print(tuple(v.values())) # Get the values of the keys in the dictionary | (1, 2, 3, 1, 1)
[1, 2, 3, 1, 1]
(1, 2, 3, 1, 1)
{1, 2, 3}
{'one': 1, 'two': 2, 'three': 3}
('one', 'two', 'three')
('one', 'two', 'three')
(1, 2, 3)
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Controlling the Flow of Program ExecutionAs in most programing languages, Python allows program execution to branch when certain conditions are met, and it also allows arbitrary execution loops. Without such features, Python would not be very useful (or Turing complete). If StatementsIn Python, *if-then-else* statements are specified using the keywords `if`, `elif` (else if), and `else` (else). The general form is given below: if condition1: do_something elif condition2: do_something_else ... elif condition_n: do_something_else else: if_all_else_fails_do_thisThe `elif` and `else` clauses are optional. There can be many `elif` clauses, but there can be only 1 `else` clause in the `if`-`elif`-`else` sequence. | x = 3
# Test the number if it bigger than 10
if x > 10:
print("value " + str(x) + " is greater than 10")
# Test the number if it bigger than or equal to 7 and less than 10
elif x >= 7 and x < 10:
print("value " + str(x) + " is in range [7,10)")
# Test the number if it bigger than or equal to 5 and less than 7
elif x >= 5 and x < 7:
print("value " + str(x) + " is in range [5,7)")
# Test the number if it's less than 5
else:
print("value " + str(x) + " is less than 5")
| value 3 is less than 5
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
While Loops Python provides both `while` loops and `for` loops. The former are arguably lower-level but not as natural-looking to a human eye. Below is a simple `while` loop. So long as the condition specified evaluates to a value comparable to `True`, the code in the body of the loop will be executed. As such, without the statement incrementing `i`, the loop would never halt. ```while condition: do_something``` | string = "hello world"
length = len(string)# get the length of the string
i = 0
while i < length:
print(string[i])
i = i + 1 | h
e
l
l
o
w
o
r
l
d
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Loops, including while loops, can contain break statements (which aborts execution of the loop) and continue statements (which tell the loop to proceed to the next cycle). | num = 0
while num < 5:
num += 1 # num += 1 is same as num = num + 1
print('num = ', num)
if num == 3: # condition before exiting a loop
break
num = 0
while num < 5:
num += 1
if num > 3: # condition before exiting a loop
continue
print('num = ', num) # the statement after 'continue' statement is skipped | num = 1
num = 2
num = 3
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
While Loop with else Block The `else` clause of a `while` loop runs only when the loop finishes without encountering a `break` statement, as the two examples below illustrate. | num = 0
while num < 3:
num += 1
print('num = ', num)
else:
print('else block executed')
a = ['A', 'B', 'C', 'D']
s = 'd'
i = 0
while i < len(a):
if a[i] == s:
# Processing for item found
break
i += 1
else:
# Processing for item not found
print(s, 'not found in list') | d not found in list
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
Below we use a `while` loop to collect the unique characters of a string into a set, then print them in sorted order. (A short sketch that also counts the occurrences of each character follows this example.) | # unique characters
raw_string = 'Hello'
result = set()
i = 0
length = len(raw_string)
while i < length:
result.add(raw_string[i])
i = i + 1
print(sorted(list(result)))
| ['H', 'e', 'l', 'o']
| MIT | Week 01 - Introduction to Python/Python I.ipynb | TheAIDojo/Machine_Learning_Bootcamp |
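The counting step mentioned above is not shown in this excerpt; here is a minimal sketch of one way to tally the occurrences of each character using a `while` loop and the built-in `str.count` method (the variable names are illustrative only).

```python
raw_string = 'Hello'
counts = {}
i = 0
while i < len(raw_string):
    ch = raw_string[i]
    counts[ch] = raw_string.count(ch)  # str.count tallies how often ch appears
    i = i + 1
print(counts)  # {'H': 1, 'e': 1, 'l': 2, 'o': 1}
```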