Testing the database connection. We have a lookup table containing the FRED series codes along with their descriptions. Let's export the connection parameters and test the connection by running a select query against the lookup table.
sr_list = []  # collect the series codes for later use

cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                             host='127.0.0.1', database=fred_sql['database'])
cursor = cn.cursor()

query = ("SELECT frd_cd, frd_val FROM frd_lkp")
cursor.execute(query)

for (frd_cd, frd_val) in cursor:
    sr_list.append(frd_cd)
    print(frd_cd + ' - ' + frd_val)

cn.close()
UMCSENT - University of Michigan Consumer Sentiment Index
GDPC1 - Real Gross Domestic Product
UNRATE - US Civilian Unemployment Rate
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
Helper functions. We are doing this exercise with minimal modelling, so there is just one target table to store the observations for all series. Let's create a few helper functions to make the process easier. db_max_count - we add a surrogate key to the table to make general querying and loads easier; this function returns the current maximum key, using COALESCE to get a valid value even when the table is empty. db_srs_count - since we use a single target table, the series name is stored as part of the data; this function returns the row count for each series present in the table. fred_req - helper function that sends the request to the FRED API and returns the response.
def db_max_count():
    """Return the current maximum surrogate key (0 if the table is empty)."""
    cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                                 host='127.0.0.1', database=fred_sql['database'])
    cursor = cn.cursor()
    dbquery = ("SELECT COALESCE(max(idfrd_srs), 0) FROM frd_srs_data")
    cursor.execute(dbquery)
    max_count = 0
    for ct in cursor:
        if ct is not None:
            max_count = ct[0]
    cn.close()
    return max_count

def db_srs_count():
    """Print the number of rows stored for each series."""
    cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                                 host='127.0.0.1', database=fred_sql['database'])
    cursor = cn.cursor()
    dbquery = ("SELECT frd_srs, count(*) FROM frd_srs_data group by frd_srs")
    cursor.execute(dbquery)
    for ct in cursor:
        print(ct)
    cn.close()

def fred_req(series):
    """Send a request to the FRED API and return the JSON response."""
    time.sleep(10)  # simple rate limiting between API calls
    response = requests.get('https://api.stlouisfed.org/fred/series/observations?series_id='
                            + series + '&api_key=' + fred_key['api_key'] + '&file_type=json')
    result = response.json()
    return result
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
Main functions. We are creating the main functions to support the process. Here are the steps:
1) Get the data from the FRED API (using the helper function created above).
2) Validate and transform the observations data from the API.
3) Create tuples according to the table structure.
4) Load the tuples into the relational database.
The function fred_data handles steps 2 and 3 (calling the helper for step 1), and the function dbload handles step 4.
def dbload(tuple_list):
    try:
        cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                                     host='127.0.0.1', database=fred_sql['database'])
        cursor = cn.cursor()
        insert_query = ("INSERT INTO frd_srs_data"
                        "(idfrd_srs,frd_srs,frd_srs_val_dt,frd_srs_val,frd_srs_val_yr,frd_srs_val_mth,frd_srs_val_dy,frd_srs_strt_dt,frd_srs_end_dt)"
                        "VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)")
        print("*** Database Connection Initialized, buckle up the seat belts..")

        # Data load..
        for i in range(len(tuple_list)):
            data_val = tuple_list[i]
            cursor.execute(insert_query, data_val)
        cn.commit()

        # Intended timeout before starting the next iteration of the load..
        time.sleep(5)
        print("\n *** Data load successful.. ")
        db_srs_count()

        # Closing database connection...
        cn.close()
    except mysql.connector.Error as err:
        cn.close()
        print("Something went wrong: {}".format(err))

def fred_data(series):
    print("\n")
    print("** Getting data for the series: " + series)
    counter = db_max_count()

    # Calling function to get the data from FRED API for the series.
    fred_result = fred_req(series)
    print("** Number of observations extracted -" '{:d}'.format(fred_result['count']))

    # Transforming observations and preparing for data load.
    print("** Preparing data for load for series -", series)
    temp_lst = fred_result['observations']
    tlist = []

    # From the incoming data, create a tuple of values for each observation.
    for val in range(len(temp_lst)):
        temp_dict = temp_lst[val]
        for key, val in temp_dict.items():
            if key == 'date':
                dt_lst = val.split("-")
                yr = dt_lst[0]
                mth = dt_lst[1]
                dtt = dt_lst[2]
            if key == 'value':
                if len(val.strip()) > 1:
                    out_val = val
                else:
                    out_val = 0.00  # FRED marks missing values with '.'
        counter += 1
        tup = (counter, series, temp_dict['date'], out_val, yr, mth, dtt,
               temp_dict['realtime_start'], temp_dict['realtime_end'])
        tlist.append(tup)

    print("** Data is ready for the load.. Loading " '{:d}'.format(len(tlist)))
    dbload(tlist)
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
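For reference, here is a hedged sketch of the two tables these functions assume. Only the column names are taken from the queries above; the data types, lengths, and keys are assumptions and may differ from the actual schema.

# Hypothetical DDL matching the column names used in the queries above.
ddl = [
    """
    CREATE TABLE IF NOT EXISTS frd_lkp (
        frd_cd  VARCHAR(20)  NOT NULL,   -- FRED series id, e.g. 'UNRATE'
        frd_val VARCHAR(200) NOT NULL,   -- series description
        PRIMARY KEY (frd_cd)
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS frd_srs_data (
        idfrd_srs       INT          NOT NULL,  -- surrogate key
        frd_srs         VARCHAR(20)  NOT NULL,  -- series id
        frd_srs_val_dt  DATE         NOT NULL,  -- observation date
        frd_srs_val     DECIMAL(18,6),          -- observation value
        frd_srs_val_yr  INT,
        frd_srs_val_mth INT,
        frd_srs_val_dy  INT,
        frd_srs_strt_dt DATE,                   -- realtime_start from the API
        frd_srs_end_dt  DATE,                   -- realtime_end from the API
        PRIMARY KEY (idfrd_srs)
    )
    """,
]

cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                             host='127.0.0.1', database=fred_sql['database'])
cursor = cn.cursor()
for stmt in ddl:
    cursor.execute(stmt)
cn.commit()
cn.close()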
Starting point... So, we now have all the functions created, based on a few assumptions (chiefly that the data is clean, with very minimal or no issues).
sr_list = ['UMCSENT', 'GDPC1', 'UNRATE']

for series in sr_list:
    fred_data(series)

cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                             host='127.0.0.1', database=fred_sql['database'])
cursor = cn.cursor()

quizquery = ("SELECT frd_srs_val_yr, avg(frd_srs_val) as avg_unrate FROM fred.frd_srs_data "
             "WHERE frd_srs='UNRATE' AND frd_srs_val_yr BETWEEN 1980 AND 2015 "
             "GROUP BY frd_srs_val_yr ORDER BY 1")
cursor.execute(quizquery)

for qz in cursor:
    print(qz)
(1980, 7.175000000000001) (1981, 7.616666666666667) (1982, 9.708333333333332) (1983, 9.6) (1984, 7.508333333333334) (1985, 7.191666666666666) (1986, 7.0) (1987, 6.175000000000001) (1988, 5.491666666666666) (1989, 5.258333333333333) (1990, 5.616666666666666) (1991, 6.849999999999999) (1992, 7.491666666666667) (1993, 6.908333333333332) (1994, 6.1000000000000005) (1995, 5.591666666666668) (1996, 5.408333333333334) (1997, 4.941666666666666) (1998, 4.5) (1999, 4.216666666666668) (2000, 3.9666666666666663) (2001, 4.741666666666666) (2002, 5.783333333333334) (2003, 5.991666666666667) (2004, 5.541666666666667) (2005, 5.083333333333333) (2006, 4.608333333333333) (2007, 4.616666666666667) (2008, 5.8) (2009, 9.283333333333333) (2010, 9.608333333333333) (2011, 8.933333333333334) (2012, 8.075000000000001) (2013, 7.358333333333334) (2014, 6.175000000000001) (2015, 5.266666666666667)
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
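If you prefer to eyeball the yearly averages rather than read tuples, a small sketch that pulls the same query into pandas and plots it (pandas and matplotlib are assumed to be installed; the column names follow the query above):

import pandas as pd
import matplotlib.pyplot as plt

cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                             host='127.0.0.1', database=fred_sql['database'])
avg_unrate = pd.read_sql(
    "SELECT frd_srs_val_yr AS year, AVG(frd_srs_val) AS avg_unrate "
    "FROM frd_srs_data WHERE frd_srs='UNRATE' "
    "AND frd_srs_val_yr BETWEEN 1980 AND 2015 "
    "GROUP BY frd_srs_val_yr ORDER BY 1", cn)
cn.close()

avg_unrate.plot(x='year', y='avg_unrate', title='US unemployment rate, yearly average')
plt.show()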
Arize Tutorial: Surrogate Model Feature Importance. A surrogate model is an interpretable model trained to predict the predictions of a black box model. The goal is to approximate the predictions of the black box model as closely as possible and generate feature importance values from the interpretable surrogate model. The benefit of this approach is that it does not require knowledge of the inner workings of the black box model. In this tutorial we use the `MimicExplainer` from the `interpret_community` library to generate feature importance values from a surrogate model using only the prediction outputs from a black box model. Both [classification](classification) and [regression](regression) examples are provided below, and feature importance values are logged to Arize using the Pandas [logger](https://docs.arize.com/arize/api-reference/python-sdk/arize.pandas). Install and import the `interpret_community` library
!pip install -q interpret==0.2.7 interpret-community==0.22.0

from interpret_community.mimic.mimic_explainer import (
    MimicExplainer,
    LGBMExplainableModel,
)
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
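Before diving into the examples, here is a toy sketch of the surrogate-model idea using plain scikit-learn: fit an interpretable tree on a black box model's predictions and read feature importances off the tree. This is only an illustration of the concept under those assumptions, not the `interpret_community` API used below.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeRegressor

X, y = load_breast_cancer(return_X_y=True)

# Black box: we only use its predicted probabilities.
black_box = SVC(probability=True).fit(X, y)
black_box_scores = black_box.predict_proba(X)[:, 1]

# Surrogate: an interpretable model trained to mimic the black box outputs.
surrogate = DecisionTreeRegressor(max_depth=4).fit(X, black_box_scores)
print("surrogate fidelity (R^2 vs. black box):", surrogate.score(X, black_box_scores))
print("surrogate feature importances:", surrogate.feature_importances_)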
Classification Example. Generate example. In this example we'll use a support vector machine (SVM) as our black box model. Only the prediction outputs of the SVM model are needed to train the surrogate model; feature importances are generated from the surrogate model and sent to Arize.
import pandas as pd
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

bc = load_breast_cancer()

feature_names = bc.feature_names
target_names = bc.target_names
data, target = bc.data, bc.target

df = pd.DataFrame(data, columns=feature_names)

model = SVC(probability=True).fit(df, target)

prediction_label = pd.Series(map(lambda v: target_names[v], model.predict(df)))
prediction_score = pd.Series(map(lambda v: v[1], model.predict_proba(df)))
actual_label = pd.Series(map(lambda v: target_names[v], target))
actual_score = pd.Series(target)
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Generate feature importance values. Note that the model itself is not used here. Only its prediction outputs are used.
def model_func(_):
    return np.array(list(map(lambda p: [1 - p, p], prediction_score)))

explainer = MimicExplainer(
    model_func,
    df,
    LGBMExplainableModel,
    augment_data=False,
    is_function=True,
)

feature_importance_values = pd.DataFrame(
    explainer.explain_local(df).local_importance_values, columns=feature_names
)
feature_importance_values
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send data to Arize. Set up the Arize client. We'll be using the Pandas Logger. First copy the Arize `API_KEY` and `ORGANIZATION_KEY` from your admin page linked below![![Button_Open.png](https://storage.googleapis.com/arize-assets/fixtures/Button_Open.png)](https://app.arize.com/admin)
!pip install -q arize
from arize.pandas.logger import Client, Schema
from arize.utils.types import ModelTypes, Environments

ORGANIZATION_KEY = "ORGANIZATION_KEY"
API_KEY = "API_KEY"

arize_client = Client(organization_key=ORGANIZATION_KEY, api_key=API_KEY)

if ORGANIZATION_KEY == "ORGANIZATION_KEY" or API_KEY == "API_KEY":
    raise ValueError("❌ NEED TO CHANGE ORGANIZATION AND/OR API_KEY")
else:
    print("✅ Import and Setup Arize Client Done! Now we can start using Arize!")
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Helper functions to simulate prediction IDs and timestamps.
import uuid
from datetime import datetime, timedelta

# Prediction ID is required for logging any dataset
def generate_prediction_ids(df):
    return pd.Series((str(uuid.uuid4()) for _ in range(len(df))), index=df.index)

# OPTIONAL: We can directly specify when inferences were made
def simulate_production_timestamps(df, days=30):
    t = datetime.now()
    current_t, earlier_t = t.timestamp(), (t - timedelta(days=days)).timestamp()
    return pd.Series(np.linspace(earlier_t, current_t, num=len(df)), index=df.index)
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Assemble Pandas DataFrame as a production dataset with prediction IDs and timestamps.
feature_importance_values_column_names_mapping = { f"{feat}": f"{feat} (feature importance)" for feat in feature_names } production_dataset = pd.concat( [ pd.DataFrame( { "prediction_id": generate_prediction_ids(df), "prediction_ts": simulate_production_timestamps(df), "prediction_label": prediction_label, "actual_label": actual_label, "prediction_score": prediction_score, "actual_score": actual_score, } ), df, feature_importance_values.rename( columns=feature_importance_values_column_names_mapping ), ], axis=1, ) production_dataset
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send dataframe to Arize
# Define a Schema() object for Arize to pick up data from the correct columns for logging
production_schema = Schema(
    prediction_id_column_name="prediction_id",  # REQUIRED
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="prediction_label",
    prediction_score_column_name="prediction_score",
    actual_label_column_name="actual_label",
    actual_score_column_name="actual_score",
    feature_column_names=feature_names,
    shap_values_column_names=feature_importance_values_column_names_mapping,
)

# arize_client.log returns a Response object from Python's requests module
response = arize_client.log(
    dataframe=production_dataset,
    schema=production_schema,
    model_id="surrogate_model_example_classification",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    environment=Environments.PRODUCTION,
)

# If successful, the server will return a status_code of 200
if response.status_code != 200:
    print(
        f"❌ logging failed with response code {response.status_code}, {response.text}"
    )
else:
    print(
        f"✅ You have successfully logged {len(production_dataset)} data points to Arize!"
    )
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Regression Example. Generate example. In this example we'll use a support vector machine (SVM) as our black box model. Only the prediction outputs of the SVM model are needed to train the surrogate model; feature importances are generated from the surrogate model and sent to Arize.
import pandas as pd import numpy as np from sklearn.datasets import fetch_california_housing housing = fetch_california_housing() # Use only 1,000 data point for a speedier example data_reg = housing.data[:1000] target_reg = housing.target[:1000] feature_names_reg = housing.feature_names df_reg = pd.DataFrame(data_reg, columns=feature_names_reg) from sklearn.svm import SVR model_reg = SVR().fit(df_reg, target_reg) prediction_label_reg = pd.Series(model_reg.predict(df_reg)) actual_label_reg = pd.Series(target_reg)
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Generate feature importance values. Note that the model itself is not used here. Only its prediction outputs are used.
def model_func_reg(_): return np.array(prediction_label_reg) explainer_reg = MimicExplainer( model_func_reg, df_reg, LGBMExplainableModel, augment_data=False, is_function=True, ) feature_importance_values_reg = pd.DataFrame( explainer_reg.explain_local(df_reg).local_importance_values, columns=feature_names_reg, ) feature_importance_values_reg
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Assemble Pandas DataFrame as a production dataset with prediction IDs and timestamps.
feature_importance_values_column_names_mapping_reg = { f"{feat}": f"{feat} (feature importance)" for feat in feature_names_reg } production_dataset_reg = pd.concat( [ pd.DataFrame( { "prediction_id": generate_prediction_ids(df_reg), "prediction_ts": simulate_production_timestamps(df_reg), "prediction_label": prediction_label_reg, "actual_label": actual_label_reg, } ), df_reg, feature_importance_values_reg.rename( columns=feature_importance_values_column_names_mapping_reg ), ], axis=1, ) production_dataset_reg
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send DataFrame to Arize.
# Define a Schema() object for Arize to pick up data from the correct columns for logging production_schema_reg = Schema( prediction_id_column_name="prediction_id", # REQUIRED timestamp_column_name="prediction_ts", prediction_label_column_name="prediction_label", actual_label_column_name="actual_label", feature_column_names=feature_names_reg, shap_values_column_names=feature_importance_values_column_names_mapping_reg, ) # arize_client.log returns a Response object from Python's requests module response_reg = arize_client.log( dataframe=production_dataset_reg, schema=production_schema_reg, model_id="surrogate_model_example_regression", model_type=ModelTypes.NUMERIC, environment=Environments.PRODUCTION, ) # If successful, the server will return a status_code of 200 if response_reg.status_code != 200: print( f"❌ logging failed with response code {response_reg.status_code}, {response_reg.text}" ) else: print( f"βœ… You have successfully logged {len(production_dataset_reg)} data points to Arize!" )
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Plotting
import numpy as np
import pandas as pd

ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000))
ts = ts.cumsum()
ts.plot();
_____no_output_____
MIT
01_pandas_basics/10_pandas_plotting.ipynb
markumreed/data_management_sp_2021
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['A', 'B', 'C', 'D'])
df = df.cumsum()
df.plot();
_____no_output_____
MIT
01_pandas_basics/10_pandas_plotting.ipynb
markumreed/data_management_sp_2021
!pip show tensorflow !git clone https://github.com/MingSheng92/AE_denoise.git from google.colab import drive drive.mount('/content/drive') %load /content/AE_denoise/scripts/utility.py %load /content/AE_denoise/scripts/Denoise_NN.py from AE_denoise.scripts.utility import load_data, faceGrid, ResultGrid, subsample, AddNoiseToMatrix, noisy from AE_denoise.scripts.Denoise_NN import PSNRLoss, createModel import numpy as np import matplotlib.pyplot as plt from sklearn.model_selection import train_test_split img_data, label, img_size = load_data('/content/drive/My Drive/FaceDataset/CroppedYaleB', 0) #img_data, label, img_size = load_data('/content/drive/My Drive/FaceDataset/ORL', 0) img_size x_train, x_test, y_train, y_test = train_test_split(img_data.T, label, test_size=0.1, random_state=111) x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.1, random_state=111) print("Total number of training samples: ", x_train.shape) print("Total number of training samples: ", x_val.shape) print("Total number of validation samples: ", x_test.shape) x_train = x_train.astype('float32') / 255.0 x_val = x_val.astype('float32') / 255.0 x_test = x_test.astype('float32') / 255.0 #x_train = x_train.reshape(-1, img_size[0], img_size[1], 1) #x_val = x_val.reshape(-1, img_size[0], img_size[1], 1) x_train = np.reshape(x_train, (len(x_train), img_size[0], img_size[1], 1)) x_val = np.reshape(x_val, (len(x_val), img_size[0], img_size[1], 1)) x_test = np.reshape(x_test, (len(x_test), img_size[0], img_size[1], 1)) # add noise to the face images noise_factor = 0.3 x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape) x_val_noisy = x_val + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_val.shape) x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_val_noisy = np.clip(x_val_noisy, 0., 1.) x_test_noisy = np.clip(x_test_noisy, 0., 1.) faceGrid(10, x_train, img_size, 64) faceGrid(10, x_train_noisy, img_size, 64) model = createModel(img_size) model.summary() model.fit(x_train_noisy, x_train, epochs=15, batch_size=64, validation_data=(x_val_noisy, x_val)) denoise_prediction = model.predict(x_test_noisy) faceGrid(10, x_test, img_size, 5) faceGrid(10, x_test_noisy, img_size, 5) faceGrid(10, denoise_prediction, img_size, 5)
_____no_output_____
MIT
DL_Example.ipynb
MingSheng92/AE_denoise
The Rosenblatt Perceptron. An example on the MNIST database. Import
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
Model Parameters
input_size = 784
no_classes = 10
batch_size = 100
total_batches = 200

x_input = tf.placeholder(tf.float32, shape=[None, input_size])
y_input = tf.placeholder(tf.float32, shape=[None, no_classes])

weights = tf.Variable(tf.random_normal([input_size, no_classes]))
bias = tf.Variable(tf.random_normal([no_classes]))
logits = tf.matmul(x_input, weights) + bias

softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_input, logits=logits)
loss_operation = tf.reduce_mean(softmax_cross_entropy)
optimiser = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss_operation)
_____no_output_____
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
Run the model
session = tf.Session()
session.run(tf.global_variables_initializer())

for batch_no in range(total_batches):
    mnist_batch = mnist_data.train.next_batch(batch_size)
    _, loss_value = session.run([optimiser, loss_operation],
                                feed_dict={x_input: mnist_batch[0], y_input: mnist_batch[1]})
    print(loss_value)

predictions = tf.argmax(logits, 1)
correct_predictions = tf.equal(predictions, tf.argmax(y_input, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_predictions, tf.float32))

test_images, test_labels = mnist_data.test.images, mnist_data.test.labels
accuracy_value = session.run(accuracy_operation,
                             feed_dict={x_input: test_images, y_input: test_labels})
print('Accuracy : ', accuracy_value)

session.close()
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters /anaconda3/lib/python3.6/site-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.21.1) or chardet (2.3.0) doesn't match a supported version! RequestsDependencyWarning)
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
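The cells above assume TensorFlow 1.x, where `tf.placeholder`, `tf.Session`, and the `tensorflow.examples.tutorials.mnist` reader exist. Under TensorFlow 2 that reader is gone; a hedged sketch of loading the same data with `tf.keras.datasets` and restoring graph semantics through the compat layer:

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restore graph/Session semantics for the code above

# MNIST via Keras instead of the removed tensorflow.examples.tutorials reader.
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.mnist.load_data()
train_x = train_x.reshape(-1, 784).astype('float32') / 255.0
test_x = test_x.reshape(-1, 784).astype('float32') / 255.0
train_y = np.eye(10)[train_y]  # one-hot encode, matching one_hot=True above
test_y = np.eye(10)[test_y]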
PyTorch: nn. A fully-connected ReLU network with one hidden layer, trained to predict y from x by minimizing squared Euclidean distance. This implementation uses the nn package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks; this is where the nn package can help. The nn package defines a set of Modules, which you can think of as a neural network layer that produces output from input and may have some trainable weights.
import torch from torch.autograd import Variable # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random Tensors to hold inputs and outputs, and wrap them in Variables. x = Variable(torch.randn(N, D_in)) y = Variable(torch.randn(N, D_out), requires_grad=False) # Use the nn package to define our model as a sequence of layers. nn.Sequential # is a Module which contains other Modules, and applies them in sequence to # produce its output. Each Linear Module computes output from input using a # linear function, and holds internal Variables for its weight and bias. model = torch.nn.Sequential( torch.nn.Linear(D_in, H), torch.nn.ReLU(), torch.nn.Linear(H, D_out), ) # The nn package also contains definitions of popular loss functions; in this # case we will use Mean Squared Error (MSE) as our loss function. loss_fn = torch.nn.MSELoss(size_average=False) learning_rate = 1e-4 for t in range(500): # Forward pass: compute predicted y by passing x to the model. Module objects # override the __call__ operator so you can call them like functions. When # doing so you pass a Variable of input data to the Module and it produces # a Variable of output data. y_pred = model(x) # Compute and print loss. We pass Variables containing the predicted and true # values of y, and the loss function returns a Variable containing the # loss. loss = loss_fn(y_pred, y) print(t, loss.data[0]) # Zero the gradients before running the backward pass. model.zero_grad() # Backward pass: compute gradient of the loss with respect to all the learnable # parameters of the model. Internally, the parameters of each Module are stored # in Variables with requires_grad=True, so this call will compute gradients for # all learnable parameters in the model. loss.backward() # Update the weights using gradient descent. Each parameter is a Variable, so # we can access its data and gradients like we did before. for param in model.parameters(): param.data -= learning_rate * param.grad.data
_____no_output_____
MIT
two_layer_net_nn.ipynb
asapypy/mokumokuTorch
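The manual weight update in the loop above can also be delegated to an optimizer from `torch.optim`; a minimal sketch with the same model and loss (written against a current PyTorch version, so `Variable` wrappers are no longer needed):

import torch

# Same shapes as above; random data stands in for x and y.
N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# torch.optim performs the parameter update that the loop above does by hand.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()   # clear old gradients
    loss.backward()         # compute new gradients
    optimizer.step()        # update the weights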
Understanding Data Types in Python Effective data-driven science and computation requires understanding how data is stored and manipulated. This section outlines and contrasts how arrays of data are handled in the Python language itself, and how NumPy improves on this. Understanding this difference is fundamental to understanding much of the material throughout the rest of the course.Python is simple to use. While a statically-typed language like C or Java requires each variable to be explicitly declared, a dynamically-typed language like Python skips this specification.In C, the data types of each variable are explicitly declared, while in Python the types are dynamically inferred.This means, for example, that we can assign any kind of data to any variable:
x = 4 x = "four"
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
This sort of flexibility is one piece of what makes Python and other dynamically-typed languages convenient and easy to use. 1.1. Data Types. We have several data types in Python:
* None
* Numeric (int, float, complex, bool)
* List
* Tuple
* Set
* String
* Range
* Dictionary (Map)
# NoneType
a = None
type(a)

# int
a = 1 + 1
print(a)
type(a)

# complex
c = 1.5 + 0.5j
type(c)
c.real
c.imag

# boolean
d = 2 > 3
print(d)
type(d)
False
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Python Lists Let's consider now what happens when we use a Python data structure that holds many Python objects. The standard mutable multi-element container in Python is the list. We can create a list of integers as follows:
L = list(range(10))
L
type(L[0])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Or, similarly, a list of strings:
L2 = [str(c) for c in L]
L2
type(L2[0])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Because of Python's dynamic typing, we can even create heterogeneous lists:
L3 = [True, "2", 3.0, 4]
[type(item) for item in L3]
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Python Dictionaries
keys = [1, 2, 3, 4, 5]
values = ['monday', 'tuesday', 'wendsday', 'friday']

# zip stops at the shorter sequence, so key 5 gets no value here
dictionary = dict(zip(keys, values))
dictionary

dictionary.get(1)
dictionary[1]
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
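A couple of other everyday dictionary operations, as a small complement to the cell above (the variable names reuse the ones defined there):

# Iterate over key/value pairs
for day_number, day_name in dictionary.items():
    print(day_number, day_name)

# .get with a default avoids a KeyError (and a None) for missing keys
print(dictionary.get(5, 'unknown'))

# Build a reversed mapping with a dict comprehension
name_to_number = {name: number for number, name in dictionary.items()}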
Fixed-Type Arrays in Python
import numpy as np
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
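As context for this section's title, Python's built-in `array` module already provides fixed-type storage; a brief sketch (added here as an aside, not part of the original lesson):

import array

# 'i' means every element is stored as a C int: one type for the whole array
A = array.array('i', range(10))
A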
First, we can use np.array to create arrays from Python lists:
# integer array:
np.array([1, 4, 2, 5, 3])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Unlike Python lists, NumPy arrays are constrained to contain a single data type. If we want to explicitly set the data type of the resulting array, we can use the dtype keyword:
np.array([1, 2, 3, 4], dtype='float32')
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Creating Arrays from Scratch Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy. Here are several examples:
# Create a length-10 integer array filled with zeros
np.zeros(10, dtype=int)

# Create a 3x5 floating-point array filled with ones
np.ones((3, 5), dtype=float)

# Create a 3x5 array filled with 3.14
np.full((3, 5), 3.14)

# Create an array filled with a linear sequence
np.arange(1, 10)

# Starting at 0, ending at 20, stepping by 2
# (this is similar to the built-in range() function)
np.arange(0, 20, 2)
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
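A few more creation routines in the same spirit (these are standard NumPy functions; the random values will of course differ on each run):

# Evenly spaced values between 0 and 1
np.linspace(0, 1, 5)

# 3x3 array of uniform random values in [0, 1)
np.random.random((3, 3))

# 3x3 identity matrix
np.eye(3)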
GlobalMaxPooling1D **[pooling.GlobalMaxPooling1D.0] input 6x6**
data_in_shape = (6, 6) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(260) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling1D.0'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
in shape: (6, 6) in: [-0.806777, -0.564841, -0.481331, 0.559626, 0.274958, -0.659222, -0.178541, 0.689453, -0.028873, 0.053859, -0.446394, -0.53406, 0.776897, -0.700858, -0.802179, -0.616515, 0.718677, 0.303042, -0.080606, -0.850593, -0.795971, 0.860487, -0.90685, 0.89858, 0.617251, 0.334305, -0.351137, -0.642574, 0.108974, -0.993964, 0.051085, -0.372012, 0.843766, 0.088025, -0.598662, 0.789035] out shape: (6,) out: [0.776897, 0.689453, 0.843766, 0.860487, 0.718677, 0.89858]
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
**[pooling.GlobalMaxPooling1D.1] input 3x7**
data_in_shape = (3, 7) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(261) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling1D.1'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
in shape: (3, 7) in: [0.601872, -0.028379, 0.654213, 0.217731, -0.864161, 0.422013, 0.888312, -0.714141, -0.184753, 0.224845, -0.221123, -0.847943, -0.511334, -0.871723, -0.597589, -0.889034, -0.544887, -0.004798, 0.406639, -0.35285, 0.648562] out shape: (7,) out: [0.601872, -0.028379, 0.654213, 0.217731, 0.406639, 0.422013, 0.888312]
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
**[pooling.GlobalMaxPooling1D.2] input 8x4**
data_in_shape = (8, 4) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(262) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in])) data_out_shape = result[0].shape data_in_formatted = format_decimal(data_in.ravel().tolist()) data_out_formatted = format_decimal(result[0].ravel().tolist()) print('') print('in shape:', data_in_shape) print('in:', data_in_formatted) print('out shape:', data_out_shape) print('out:', data_out_formatted) DATA['pooling.GlobalMaxPooling1D.2'] = { 'input': {'data': data_in_formatted, 'shape': data_in_shape}, 'expected': {'data': data_out_formatted, 'shape': data_out_shape} }
in shape: (8, 4) in: [-0.215694, 0.441215, 0.116911, 0.53299, 0.883562, -0.535525, -0.869764, -0.596287, 0.576428, -0.689083, -0.132924, -0.129935, -0.17672, -0.29097, 0.590914, 0.992098, 0.908965, -0.170202, 0.640203, 0.178644, 0.866749, -0.545566, -0.827072, 0.420342, -0.076191, 0.207686, -0.908472, -0.795307, -0.948319, 0.683682, -0.563278, -0.82135] out shape: (4,) out: [0.908965, 0.683682, 0.640203, 0.992098]
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
export for Keras.js tests
import os

filename = '../../../test/data/layers/pooling/GlobalMaxPooling1D.json'
if not os.path.exists(os.path.dirname(filename)):
    os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
    json.dump(DATA, f)

print(json.dumps(DATA))
{"pooling.GlobalMaxPooling1D.0": {"input": {"data": [-0.806777, -0.564841, -0.481331, 0.559626, 0.274958, -0.659222, -0.178541, 0.689453, -0.028873, 0.053859, -0.446394, -0.53406, 0.776897, -0.700858, -0.802179, -0.616515, 0.718677, 0.303042, -0.080606, -0.850593, -0.795971, 0.860487, -0.90685, 0.89858, 0.617251, 0.334305, -0.351137, -0.642574, 0.108974, -0.993964, 0.051085, -0.372012, 0.843766, 0.088025, -0.598662, 0.789035], "shape": [6, 6]}, "expected": {"data": [0.776897, 0.689453, 0.843766, 0.860487, 0.718677, 0.89858], "shape": [6]}}, "pooling.GlobalMaxPooling1D.1": {"input": {"data": [0.601872, -0.028379, 0.654213, 0.217731, -0.864161, 0.422013, 0.888312, -0.714141, -0.184753, 0.224845, -0.221123, -0.847943, -0.511334, -0.871723, -0.597589, -0.889034, -0.544887, -0.004798, 0.406639, -0.35285, 0.648562], "shape": [3, 7]}, "expected": {"data": [0.601872, -0.028379, 0.654213, 0.217731, 0.406639, 0.422013, 0.888312], "shape": [7]}}, "pooling.GlobalMaxPooling1D.2": {"input": {"data": [-0.215694, 0.441215, 0.116911, 0.53299, 0.883562, -0.535525, -0.869764, -0.596287, 0.576428, -0.689083, -0.132924, -0.129935, -0.17672, -0.29097, 0.590914, 0.992098, 0.908965, -0.170202, 0.640203, 0.178644, 0.866749, -0.545566, -0.827072, 0.420342, -0.076191, 0.207686, -0.908472, -0.795307, -0.948319, 0.683682, -0.563278, -0.82135], "shape": [8, 4]}, "expected": {"data": [0.908965, 0.683682, 0.640203, 0.992098], "shape": [4]}}}
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
Water quality Setup software libraries
# Import and initialize the Earth Engine library.
import ee
ee.Initialize()
ee.__version__

# Folium setup.
import folium
print(folium.__version__)

# Skydipper library.
import Skydipper
print(Skydipper.__version__)

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import functools
import json
import uuid
import os
from pprint import pprint
import env
import time

import ee_collection_specifics
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Composite image. **Variables**
collection = 'Lake-Water-Quality-100m' init_date = '2019-01-21' end_date = '2019-01-31' # Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' composite = ee_collection_specifics.Composite(collection)(init_date, end_date) mapid = composite.getMapId(ee_collection_specifics.vizz_params_rgb(collection)) tiles_url = EE_TILES.format(**mapid) map = folium.Map(location=[39.31, 0.302]) folium.TileLayer( tiles=tiles_url, attr='Google Earth Engine', overlay=True, name=str(ee_collection_specifics.ee_bands_rgb(collection))).add_to(map) map.add_child(folium.LayerControl()) map
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Geostore. We select the areas from which we will export the training data. **Variables**
def polygons_to_multipoligon(polygons): multipoligon = [] MultiPoligon = {} for polygon in polygons.get('features'): multipoligon.append(polygon.get('geometry').get('coordinates')) MultiPoligon = { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "MultiPolygon", "coordinates": multipoligon } } ] } return MultiPoligon #trainPolygons = {"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-0.45043945312499994,39.142842478062505],[0.06042480468749999,39.142842478062505],[0.06042480468749999,39.55064761909318],[-0.45043945312499994,39.55064761909318],[-0.45043945312499994,39.142842478062505]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-0.2911376953125,38.659777730712534],[0.2581787109375,38.659777730712534],[0.2581787109375,39.10022600175347],[-0.2911376953125,39.10022600175347],[-0.2911376953125,38.659777730712534]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-0.3350830078125,39.56758783088905],[0.22521972656249997,39.56758783088905],[0.22521972656249997,39.757879992021756],[-0.3350830078125,39.757879992021756],[-0.3350830078125,39.56758783088905]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[0.07965087890625,39.21310328979648],[0.23345947265625,39.21310328979648],[0.23345947265625,39.54852980171147],[0.07965087890625,39.54852980171147],[0.07965087890625,39.21310328979648]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-1.0931396484375,35.7286770448517],[-0.736083984375,35.7286770448517],[-0.736083984375,35.94243575255426],[-1.0931396484375,35.94243575255426],[-1.0931396484375,35.7286770448517]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-1.7303466796874998,35.16931803601131],[-1.4666748046875,35.16931803601131],[-1.4666748046875,35.74205383068037],[-1.7303466796874998,35.74205383068037],[-1.7303466796874998,35.16931803601131]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-1.42822265625,35.285984736065764],[-1.131591796875,35.285984736065764],[-1.131591796875,35.782170703266075],[-1.42822265625,35.782170703266075],[-1.42822265625,35.285984736065764]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-1.8127441406249998,35.831174956246535],[-1.219482421875,35.831174956246535],[-1.219482421875,36.04465753921525],[-1.8127441406249998,36.04465753921525],[-1.8127441406249998,35.831174956246535]]]}}]} trainPolygons = {"type":"FeatureCollection","features":[{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-0.406494140625,38.64476310916202],[0.27740478515625,38.64476310916202],[0.27740478515625,39.74521015328692],[-0.406494140625,39.74521015328692],[-0.406494140625,38.64476310916202]]]}},{"type":"Feature","properties":{},"geometry":{"type":"Polygon","coordinates":[[[-1.70013427734375,35.15135442846945],[-0.703125,35.15135442846945],[-0.703125,35.94688293218141],[-1.70013427734375,35.94688293218141],[-1.70013427734375,35.15135442846945]]]}}]} trainPolys = polygons_to_multipoligon(trainPolygons) evalPolys = None nTrain = len(trainPolys.get('features')[0].get('geometry').get('coordinates')) print('Number of training polygons:', nTrain) if evalPolys: nEval = len(evalPolys.get('features')[0].get('geometry').get('coordinates')) print('Number of training polygons:', nEval)
Number of training polygons: 2
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display Polygons**
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' composite = ee_collection_specifics.Composite(collection)(init_date, end_date) mapid = composite.getMapId(ee_collection_specifics.vizz_params_rgb(collection)) tiles_url = EE_TILES.format(**mapid) map = folium.Map(location=[39.31, 0.302], zoom_start=6) folium.TileLayer( tiles=tiles_url, attr='Google Earth Engine', overlay=True, name=str(ee_collection_specifics.ee_bands_rgb(collection))).add_to(map) # Convert the GeoJSONs to feature collections trainFeatures = ee.FeatureCollection(trainPolys.get('features')) if evalPolys: evalFeatures = ee.FeatureCollection(evalPolys.get('features')) polyImage = ee.Image(0).byte().paint(trainFeatures, 1) if evalPolys: polyImage = ee.Image(0).byte().paint(trainFeatures, 1).paint(evalFeatures, 2) polyImage = polyImage.updateMask(polyImage) mapid = polyImage.getMapId({'min': 1, 'max': 2, 'palette': ['red', 'blue']}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='training polygons', ).add_to(map) map.add_child(folium.LayerControl()) map
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Data pre-processing. We normalize the composite images to have values from 0 to 1. **Variables**
input_dataset = 'Sentinel-2-Top-of-Atmosphere-Reflectance'
output_dataset = 'Lake-Water-Quality-100m'

init_date = '2019-01-21'
end_date = '2019-01-31'

scale = 100  # scale in meters

collections = [input_dataset, output_dataset]
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
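The normalization applied in the next cells is a per-band min-max clamp. As a plain-NumPy illustration of the same arithmetic, independent of Earth Engine and using made-up values:

import numpy as np

band = np.array([120.0, 800.0, 2500.0, 10500.0])  # made-up reflectance values
band_min, band_max = 7.0, 10857.5                  # e.g. the B11 min/max printed below

normalized = (np.clip(band, band_min, band_max) - band_min) / (band_max - band_min)
print(normalized)  # values now lie in [0, 1]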
**Normalize images**
def min_max_values(image, collection, scale, polygons=None): normThreshold = ee_collection_specifics.ee_bands_normThreshold(collection) num = 2 lon = np.linspace(-180, 180, num) lat = np.linspace(-90, 90, num) features = [] for i in range(len(lon)-1): for j in range(len(lat)-1): features.append(ee.Feature(ee.Geometry.Rectangle(lon[i], lat[j], lon[i+1], lat[j+1]))) if not polygons: polygons = ee.FeatureCollection(features) regReducer = { 'geometry': polygons, 'reducer': ee.Reducer.minMax(), 'maxPixels': 1e10, 'bestEffort': True, 'scale':scale, 'tileScale': 10 } values = image.reduceRegion(**regReducer).getInfo() print(values) # Avoid outliers by taking into account only the normThreshold% of the data points. regReducer = { 'geometry': polygons, 'reducer': ee.Reducer.histogram(), 'maxPixels': 1e10, 'bestEffort': True, 'scale':scale, 'tileScale': 10 } hist = image.reduceRegion(**regReducer).getInfo() for band in list(normThreshold.keys()): if normThreshold[band] != 100: count = np.array(hist.get(band).get('histogram')) x = np.array(hist.get(band).get('bucketMeans')) cumulative_per = np.cumsum(count/count.sum()*100) values[band+'_max'] = x[np.where(cumulative_per < normThreshold[band])][-1] return values def normalize_ee_images(image, collection, values): Bands = ee_collection_specifics.ee_bands(collection) # Normalize [0, 1] ee images for i, band in enumerate(Bands): if i == 0: image_new = image.select(band).clamp(values[band+'_min'], values[band+'_max'])\ .subtract(values[band+'_min'])\ .divide(values[band+'_max']-values[band+'_min']) else: image_new = image_new.addBands(image.select(band).clamp(values[band+'_min'], values[band+'_max'])\ .subtract(values[band+'_min'])\ .divide(values[band+'_max']-values[band+'_min'])) return image_new %%time images = [] for collection in collections: # Create composite image = ee_collection_specifics.Composite(collection)(init_date, end_date) bands = ee_collection_specifics.ee_bands(collection) image = image.select(bands) #Create composite if ee_collection_specifics.normalize(collection): # Get min man values for each band values = min_max_values(image, collection, scale, polygons=trainFeatures) print(values) # Normalize images image = normalize_ee_images(image, collection, values) else: values = {} images.append(image)
{'B11_max': 10857.5, 'B11_min': 7.0, 'B12_max': 10691.0, 'B12_min': 1.0, 'B1_max': 6806.0, 'B1_min': 983.0, 'B2_max': 6406.0, 'B2_min': 685.0, 'B3_max': 6182.0, 'B3_min': 412.0, 'B4_max': 7485.5, 'B4_min': 229.0, 'B5_max': 8444.0, 'B5_min': 186.0, 'B6_max': 9923.0, 'B6_min': 153.0, 'B7_max': 11409.0, 'B7_min': 128.0, 'B8A_max': 12957.0, 'B8A_min': 84.0, 'B8_max': 7822.0, 'B8_min': 104.0, 'ndvi_max': 0.8359633027522936, 'ndvi_min': -0.6463519313304721, 'ndwi_max': 0.7134948096885814, 'ndwi_min': -0.8102189781021898} {'B11_max': 10857.5, 'B11_min': 7.0, 'B12_max': 10691.0, 'B12_min': 1.0, 'B1_max': 1330.4577965925364, 'B1_min': 983.0, 'B2_max': 1039.5402534802865, 'B2_min': 685.0, 'B3_max': 879.698114934553, 'B3_min': 412.0, 'B4_max': 751.6494664084341, 'B4_min': 229.0, 'B5_max': 1119.607360754671, 'B5_min': 186.0, 'B6_max': 1823.92697289679, 'B6_min': 153.0, 'B7_max': 2079.961473786427, 'B7_min': 128.0, 'B8A_max': 2207.831974029281, 'B8A_min': 84.0, 'B8_max': 2031.6418424876374, 'B8_min': 104.0, 'ndvi_max': 0.8359633027522936, 'ndvi_min': -0.6463519313304721, 'ndwi_max': 0.7134948096885814, 'ndwi_min': -0.8102189781021898} CPU times: user 45.8 ms, sys: 4.96 ms, total: 50.7 ms Wall time: 9.69 s
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display composite**
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' map = folium.Map(location=[39.31, 0.302], zoom_start=6) for n, collection in enumerate(collections): for params in ee_collection_specifics.vizz_params(collection): mapid = images[n].getMapId(params) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name=str(params['bands']), ).add_to(map) # Convert the GeoJSONs to feature collections trainFeatures = ee.FeatureCollection(trainPolys.get('features')) if evalPolys: evalFeatures = ee.FeatureCollection(evalPolys.get('features')) polyImage = ee.Image(0).byte().paint(trainFeatures, 1) if evalPolys: polyImage = ee.Image(0).byte().paint(trainFeatures, 1).paint(evalFeatures, 2) polyImage = polyImage.updateMask(polyImage) mapid = polyImage.getMapId({'min': 1, 'max': 2, 'palette': ['red', 'blue']}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='training polygons', ).add_to(map) map.add_child(folium.LayerControl()) map
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Create TFRecords for training. Export pixels. **Variables**
input_bands = ['B2', 'B3', 'B4', 'B5', 'ndvi', 'ndwi']
output_bands = ['turbidity_blended_mean']
bands = [input_bands, output_bands]

dataset_name = 'Sentinel2_WaterQuality'
base_names = ['training_pixels', 'eval_pixels']
bucket = env.bucket_name
folder = 'cnn-models/' + dataset_name + '/data'
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Select the bands**
# Select the bands we want
c = images[0].select(bands[0])\
    .addBands(images[1].select(bands[1]))

pprint(c.getInfo())
{'bands': [{'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0, 'min': 0.0, 'precision': 'double', 'type': 'PixelType'}, 'id': 'B2'}, {'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0, 'min': 0.0, 'precision': 'double', 'type': 'PixelType'}, 'id': 'B3'}, {'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0, 'min': 0.0, 'precision': 'double', 'type': 'PixelType'}, 'id': 'B4'}, {'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0, 'min': 0.0, 'precision': 'double', 'type': 'PixelType'}, 'id': 'B5'}, {'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.000000004087453, 'min': -1.449649135231728e-09, 'precision': 'double', 'type': 'PixelType'}, 'id': 'ndvi'}, {'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0000000181106892, 'min': -7.70938799632259e-09, 'precision': 'double', 'type': 'PixelType'}, 'id': 'ndwi'}, {'crs': 'EPSG:4326', 'crs_transform': [0.000898311174991017, 0.0, -10.06198347107437, 0.0, -0.000898311174991017, 43.89328063241106], 'data_type': {'precision': 'float', 'type': 'PixelType'}, 'dimensions': [15043, 10004], 'id': 'turbidity_blended_mean'}], 'type': 'Image'}
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Sample pixels**
sr = c.sample(region=trainFeatures, scale=scale, numPixels=20000, tileScale=4, seed=999)

# Add random column
sr = sr.randomColumn(seed=999)

# Partition the sample approximately 70-30.
train_dataset = sr.filter(ee.Filter.lt('random', 0.7))
eval_dataset = sr.filter(ee.Filter.gte('random', 0.7))

# Print the first couple points to verify.
pprint({'training': train_dataset.first().getInfo()})
pprint({'testing': eval_dataset.first().getInfo()})

# Print the sizes of the training and evaluation sets.
from pprint import pprint
train_size = train_dataset.size().getInfo()
eval_size = eval_dataset.size().getInfo()
pprint({'training': train_size})
pprint({'testing': eval_size})
{'training': 8091} {'testing': 3508}
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Export the training and validation data**
def export_TFRecords_pixels(datasets, base_names, bucket, folder, selectors): # Export all the training/evaluation data filePaths = [] for n, dataset in enumerate(datasets): filePaths.append(bucket+ '/' + folder + '/' + base_names[n]) # Create the tasks. task = ee.batch.Export.table.toCloudStorage( collection = dataset, description = 'Export '+base_names[n], fileNamePrefix = folder + '/' + base_names[n], bucket = bucket, fileFormat = 'TFRecord', selectors = selectors) task.start() return filePaths datasets = [train_dataset, eval_dataset] selectors = input_bands + output_bands # Export training/evaluation data filePaths = export_TFRecords_pixels(datasets, base_names, bucket, folder, selectors)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Inspect data. Inspect pixels. Load the data exported from Earth Engine into a tf.data.Dataset. **Helper functions**
# Tensorflow setup. import tensorflow as tf if tf.__version__ == '1.15.0': tf.enable_eager_execution() print(tf.__version__) def parse_function(proto): """The parsing function. Read a serialized example into the structure defined by FEATURES_DICT. Args: example_proto: a serialized Example. Returns: A tuple of the predictors dictionary and the labels. """ # Define your tfrecord features = input_bands + output_bands # Specify the size and shape of patches expected by the model. columns = [ tf.io.FixedLenFeature(shape=[1,1], dtype=tf.float32) for k in features ] features_dict = dict(zip(features, columns)) # Load one example parsed_features = tf.io.parse_single_example(proto, features_dict) # Convert a dictionary of tensors to a tuple of (inputs, outputs) inputsList = [parsed_features.get(key) for key in features] stacked = tf.stack(inputsList, axis=0) # Convert the tensors into a stack in HWC shape stacked = tf.transpose(stacked, [1, 2, 0]) return stacked[:,:,:len(input_bands)], stacked[:,:,len(input_bands):] def get_dataset(glob, buffer_size, batch_size): """Get the dataset Returns: A tf.data.Dataset of training data. """ glob = tf.compat.v1.io.gfile.glob(glob) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_function, num_parallel_calls=5) dataset = dataset.shuffle(buffer_size).batch(batch_size).repeat() return dataset
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Variables**
buffer_size = 100
batch_size = 4
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Dataset**
glob = 'gs://' + bucket + '/' + folder + '/' + base_names[0] + '*'
dataset = get_dataset(glob, buffer_size, batch_size)
dataset
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Check the first record**
arr = iter(dataset.take(1)).next()

input_arr = arr[0].numpy()
print(input_arr.shape)

output_arr = arr[1].numpy()
print(output_arr.shape)
(4, 1, 1, 6) (4, 1, 1, 1)
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Training the model locally. **Variables**
job_dir = 'gs://' + bucket + '/' + 'cnn-models/' + dataset_name + '/trainer'
logs_dir = job_dir + '/logs'
model_dir = job_dir + '/model'

shuffle_size = 2000
batch_size = 4
epochs = 50
train_size = train_size
eval_size = eval_size
output_activation = ''
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Training/evaluation data** The following code loads the training and evaluation data.
import tensorflow as tf def parse_function(proto): """The parsing function. Read a serialized example into the structure defined by FEATURES_DICT. Args: example_proto: a serialized Example. Returns: A tuple of the predictors dictionary and the labels. """ # Define your tfrecord features = input_bands + output_bands # Specify the size and shape of patches expected by the model. columns = [ tf.io.FixedLenFeature(shape=[1,1], dtype=tf.float32) for k in features ] features_dict = dict(zip(features, columns)) # Load one example parsed_features = tf.io.parse_single_example(proto, features_dict) # Convert a dictionary of tensors to a tuple of (inputs, outputs) inputsList = [parsed_features.get(key) for key in features] stacked = tf.stack(inputsList, axis=0) # Convert the tensors into a stack in HWC shape stacked = tf.transpose(stacked) return stacked[:,:,:len(input_bands)], stacked[:,:,len(input_bands):] def get_dataset(glob): """Get the dataset Returns: A tf.data.Dataset of training data. """ glob = tf.compat.v1.io.gfile.glob(glob) dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') dataset = dataset.map(parse_function, num_parallel_calls=5) return dataset def get_training_dataset(): """Get the preprocessed training dataset Returns: A tf.data.Dataset of training data. """ glob = 'gs://' + bucket + '/' + folder + '/' + base_names[0] + '*' dataset = get_dataset(glob) dataset = dataset.shuffle(shuffle_size).batch(batch_size).repeat() return dataset def get_evaluation_dataset(): """Get the preprocessed evaluation dataset Returns: A tf.data.Dataset of evaluation data. """ glob = 'gs://' + bucket + '/' + folder + '/' + base_names[1] + '*' dataset = get_dataset(glob) dataset = dataset.batch(1).repeat() return dataset
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Model**
from tensorflow.python.keras import Model  # Keras model module
from tensorflow.python.keras.layers import Input, Dense, Dropout, Activation

def create_keras_model(inputShape, nClasses, output_activation='linear'):
    inputs = Input(shape=inputShape, name='vector')

    x = Dense(32, input_shape=inputShape, activation='relu')(inputs)
    x = Dropout(0.5)(x)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(nClasses)(x)

    outputs = Activation(output_activation, name='output')(x)

    model = Model(inputs=inputs, outputs=outputs, name='sequential')
    return model
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Training task** The following will get the training and evaluation data, train the model, and save it to a Cloud Storage bucket when training is done.
import tensorflow as tf import time import os def train_and_evaluate(): """Trains and evaluates the Keras model. Uses the Keras model defined in model.py and trains on data loaded and preprocessed in util.py. Saves the trained model in TensorFlow SavedModel format to the path defined in part by the --job-dir argument. """ # Create the Keras Model if not output_activation: keras_model = create_keras_model(inputShape = (None, None, len(input_bands)), nClasses = len(output_bands)) else: keras_model = create_keras_model(inputShape = (None, None, len(input_bands)), nClasses = len(output_bands), output_activation = output_activation) # Compile Keras model keras_model.compile(loss='mse', optimizer='adam', metrics=['mse']) # Pass a tfrecord training_dataset = get_training_dataset() evaluation_dataset = get_evaluation_dataset() # Setup TensorBoard callback. tensorboard_cb = tf.keras.callbacks.TensorBoard(logs_dir) # Train model keras_model.fit( x=training_dataset, steps_per_epoch=int(train_size / batch_size), epochs=epochs, validation_data=evaluation_dataset, validation_steps=int(eval_size / batch_size), verbose=1, callbacks=[tensorboard_cb]) tf.keras.models.save_model(keras_model, filepath=os.path.join(model_dir, str(int(time.time()))), save_format="tf") return keras_model model = train_and_evaluate()
Train for 2022 steps, validate for 877 steps Epoch 1/50 1/2022 [..............................] - ETA: 36:44 - loss: 0.0110 - mean_squared_error: 0.0110WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (3.539397). Check your callbacks. 2022/2022 [==============================] - 15s 7ms/step - loss: 83.2001 - mean_squared_error: 83.2309 - val_loss: 64.3992 - val_mean_squared_error: 64.3992 Epoch 2/50 2022/2022 [==============================] - 28s 14ms/step - loss: 78.7397 - mean_squared_error: 78.7687 - val_loss: 59.1074 - val_mean_squared_error: 59.1074 Epoch 3/50 2022/2022 [==============================] - 10s 5ms/step - loss: 78.1049 - mean_squared_error: 78.1339 - val_loss: 54.7844 - val_mean_squared_error: 54.7844 Epoch 4/50 2022/2022 [==============================] - 9s 4ms/step - loss: 64.1067 - mean_squared_error: 64.1305 - val_loss: 52.8855 - val_mean_squared_error: 52.8855 Epoch 5/50 2022/2022 [==============================] - 12s 6ms/step - loss: 65.9322 - mean_squared_error: 65.9566 - val_loss: 49.9769 - val_mean_squared_error: 49.9769 Epoch 6/50 2022/2022 [==============================] - 7s 3ms/step - loss: 64.9093 - mean_squared_error: 64.9334 - val_loss: 46.0060 - val_mean_squared_error: 46.0060 Epoch 7/50 2022/2022 [==============================] - 7s 3ms/step - loss: 59.9277 - mean_squared_error: 59.9500 - val_loss: 45.4808 - val_mean_squared_error: 45.4808 Epoch 8/50 2022/2022 [==============================] - 11s 6ms/step - loss: 60.2654 - mean_squared_error: 60.2877 - val_loss: 43.2340 - val_mean_squared_error: 43.2340 Epoch 9/50 2022/2022 [==============================] - 7s 4ms/step - loss: 61.9468 - mean_squared_error: 61.9697 - val_loss: 43.2755 - val_mean_squared_error: 43.2755 Epoch 10/50 2022/2022 [==============================] - 8s 4ms/step - loss: 60.1263 - mean_squared_error: 60.1486 - val_loss: 44.4449 - val_mean_squared_error: 44.4449 Epoch 11/50 2022/2022 [==============================] - 8s 4ms/step - loss: 68.2141 - mean_squared_error: 68.2394 - val_loss: 40.8561 - val_mean_squared_error: 40.8561 Epoch 12/50 2022/2022 [==============================] - 9s 4ms/step - loss: 55.4871 - mean_squared_error: 55.5077 - val_loss: 41.8557 - val_mean_squared_error: 41.8557 Epoch 13/50 2022/2022 [==============================] - 18s 9ms/step - loss: 58.3074 - mean_squared_error: 58.3290 - val_loss: 41.2392 - val_mean_squared_error: 41.2392 Epoch 14/50 2022/2022 [==============================] - 8s 4ms/step - loss: 62.9377 - mean_squared_error: 62.9610 - val_loss: 39.4673 - val_mean_squared_error: 39.4673 Epoch 15/50 2022/2022 [==============================] - 8s 4ms/step - loss: 52.0152 - mean_squared_error: 52.0330 - val_loss: 32.8405 - val_mean_squared_error: 32.8405 Epoch 16/50 2022/2022 [==============================] - 8s 4ms/step - loss: 55.8185 - mean_squared_error: 55.8392 - val_loss: 34.5340 - val_mean_squared_error: 34.5340 Epoch 17/50 2022/2022 [==============================] - 9s 5ms/step - loss: 58.6639 - mean_squared_error: 58.6857 - val_loss: 37.0712 - val_mean_squared_error: 37.0712 Epoch 18/50 2022/2022 [==============================] - 6s 3ms/step - loss: 61.4281 - mean_squared_error: 54.1492 - val_loss: 34.6674 - val_mean_squared_error: 34.6674 Epoch 19/50 2022/2022 [==============================] - 8s 4ms/step - loss: 56.6472 - mean_squared_error: 56.6683 - val_loss: 31.4451 - val_mean_squared_error: 31.4451 Epoch 20/50 2022/2022 [==============================] - 8s 4ms/step - loss: 52.6858 - 
mean_squared_error: 52.7053 - val_loss: 30.1258 - val_mean_squared_error: 30.1258 Epoch 21/50 2022/2022 [==============================] - 7s 3ms/step - loss: 53.4791 - mean_squared_error: 53.4989 - val_loss: 32.4835 - val_mean_squared_error: 32.4835 Epoch 22/50 2022/2022 [==============================] - 10s 5ms/step - loss: 52.6867 - mean_squared_error: 51.6206 - val_loss: 33.0613 - val_mean_squared_error: 33.0613 Epoch 23/50 2022/2022 [==============================] - 8s 4ms/step - loss: 51.0708 - mean_squared_error: 51.0897 - val_loss: 28.4322 - val_mean_squared_error: 28.4322 Epoch 24/50 2022/2022 [==============================] - 5s 2ms/step - loss: 48.4817 - mean_squared_error: 48.4997 - val_loss: 26.6276 - val_mean_squared_error: 26.6276 Epoch 25/50 2022/2022 [==============================] - 15s 7ms/step - loss: 40.9348 - mean_squared_error: 40.9500 - val_loss: 23.2825 - val_mean_squared_error: 23.2825 Epoch 26/50 2022/2022 [==============================] - 9s 4ms/step - loss: 48.1200 - mean_squared_error: 48.1378 - val_loss: 22.9047 - val_mean_squared_error: 22.9047 Epoch 27/50 2022/2022 [==============================] - 13s 6ms/step - loss: 38.1358 - mean_squared_error: 38.1500 - val_loss: 22.1093 - val_mean_squared_error: 22.1093 Epoch 28/50 2022/2022 [==============================] - 9s 4ms/step - loss: 41.3039 - mean_squared_error: 41.3192 - val_loss: 20.6742 - val_mean_squared_error: 20.6742 Epoch 29/50 2022/2022 [==============================] - 6s 3ms/step - loss: 55.5983 - mean_squared_error: 55.6182 - val_loss: 22.4796 - val_mean_squared_error: 22.4796 Epoch 30/50 2022/2022 [==============================] - 5s 3ms/step - loss: 47.1700 - mean_squared_error: 47.1874 - val_loss: 18.7321 - val_mean_squared_error: 18.7321 Epoch 31/50 2022/2022 [==============================] - 13s 7ms/step - loss: 37.0061 - mean_squared_error: 37.0198 - val_loss: 18.1387 - val_mean_squared_error: 18.1387 Epoch 32/50 2022/2022 [==============================] - 5s 3ms/step - loss: 38.3234 - mean_squared_error: 38.3376 - val_loss: 17.2121 - val_mean_squared_error: 17.2121 Epoch 33/50 2022/2022 [==============================] - 6s 3ms/step - loss: 35.8868 - mean_squared_error: 35.9001 - val_loss: 13.4702 - val_mean_squared_error: 13.4702 Epoch 34/50 2022/2022 [==============================] - 7s 4ms/step - loss: 39.1125 - mean_squared_error: 39.1271 - val_loss: 14.8563 - val_mean_squared_error: 14.8563 Epoch 35/50 2022/2022 [==============================] - 9s 4ms/step - loss: 35.0492 - mean_squared_error: 35.0621 - val_loss: 7.9853 - val_mean_squared_error: 7.9853 Epoch 36/50 2022/2022 [==============================] - 9s 4ms/step - loss: 32.7854 - mean_squared_error: 32.7975 - val_loss: 5.5603 - val_mean_squared_error: 5.5603 Epoch 37/50 2022/2022 [==============================] - 8s 4ms/step - loss: 28.6975 - mean_squared_error: 28.7081 - val_loss: 9.9096 - val_mean_squared_error: 9.9096 Epoch 38/50 2022/2022 [==============================] - 5s 3ms/step - loss: 32.4937 - mean_squared_error: 32.5058 - val_loss: 8.3113 - val_mean_squared_error: 8.3113 Epoch 39/50 2022/2022 [==============================] - 10s 5ms/step - loss: 28.3869 - mean_squared_error: 28.3974 - val_loss: 15.2752 - val_mean_squared_error: 15.2752 Epoch 40/50 2022/2022 [==============================] - 7s 3ms/step - loss: 31.6952 - mean_squared_error: 31.7070 - val_loss: 6.0550 - val_mean_squared_error: 6.0550 Epoch 41/50 2022/2022 [==============================] - 8s 4ms/step - loss: 24.0169 - 
mean_squared_error: 24.0259 - val_loss: 6.6364 - val_mean_squared_error: 6.6364 Epoch 42/50 2022/2022 [==============================] - 6s 3ms/step - loss: 28.1696 - mean_squared_error: 28.1800 - val_loss: 3.9832 - val_mean_squared_error: 3.9832 Epoch 43/50 2022/2022 [==============================] - 9s 5ms/step - loss: 27.9051 - mean_squared_error: 27.9154 - val_loss: 6.5917 - val_mean_squared_error: 6.5917 Epoch 44/50 2022/2022 [==============================] - 8s 4ms/step - loss: 36.0532 - mean_squared_error: 36.0665 - val_loss: 9.1431 - val_mean_squared_error: 9.1431 Epoch 45/50 2022/2022 [==============================] - 7s 3ms/step - loss: 34.9575 - mean_squared_error: 34.9704 - val_loss: 2.6993 - val_mean_squared_error: 2.6993 Epoch 46/50 2022/2022 [==============================] - 10s 5ms/step - loss: 23.5416 - mean_squared_error: 23.5503 - val_loss: 11.6222 - val_mean_squared_error: 11.6222 Epoch 47/50 2022/2022 [==============================] - 6s 3ms/step - loss: 31.2373 - mean_squared_error: 31.2488 - val_loss: 3.7480 - val_mean_squared_error: 3.7480 Epoch 48/50 2022/2022 [==============================] - 8s 4ms/step - loss: 25.8300 - mean_squared_error: 25.8396 - val_loss: 2.2407 - val_mean_squared_error: 2.2407 Epoch 49/50 2022/2022 [==============================] - 6s 3ms/step - loss: 25.2008 - mean_squared_error: 25.2070 - val_loss: 2.5820 - val_mean_squared_error: 2.5820 Epoch 50/50 2022/2022 [==============================] - 5s 2ms/step - loss: 26.1330 - mean_squared_error: 26.1426 - val_loss: 4.6872 - val_mean_squared_error: 4.6872 INFO:tensorflow:Assets written to: gs://skydipper_materials/cnn-models/Sentinel2_WaterQuality/trainer/model/1580817124/assets
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Evaluate model**
evaluation_dataset = get_evaluation_dataset() model.evaluate(evaluation_dataset, steps=int(eval_size / batch_size))
877/877 [==============================] - 1s 1ms/step - loss: 4.6872 - mean_squared_error: 4.6872
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Read pretrained model
job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer' model_dir = job_dir + '/model' PROJECT_ID = env.project_id # Pick the directory with the latest timestamp, in case you've trained multiple times exported_model_dirs = ! gsutil ls {model_dir} saved_model_path = exported_model_dirs[-1] model = tf.keras.models.load_model(saved_model_path)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Predict in Earth Engine Prepare the model for making predictions in Earth Engine. Before we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to the inputs and outputs of the model in order to accommodate the interchange format between pixels from Earth Engine (float32) and inputs to AI Platform (base64). (See [this doc](https://cloud.google.com/ml-engine/docs/online-predictbinary_data_in_prediction_input) for details.) **`earthengine model prepare`** The EEification process is handled for you by the Earth Engine command `earthengine model prepare`. To use that command, we need to specify the input and output model directories and the names of the input and output nodes in the TensorFlow computation graph. We can do all that programmatically:
dataset_name = 'Sentinel2_WaterQuality' job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer' model_dir = job_dir + '/model' project_id = env.project_id # Pick the directory with the latest timestamp, in case you've trained multiple times exported_model_dirs = ! gsutil ls {model_dir} saved_model_path = exported_model_dirs[-1] folder_name = saved_model_path.split('/')[-2] from tensorflow.python.tools import saved_model_utils meta_graph_def = saved_model_utils.get_meta_graph_def(saved_model_path, 'serve') inputs = meta_graph_def.signature_def['serving_default'].inputs outputs = meta_graph_def.signature_def['serving_default'].outputs # Just get the first thing(s) from the serving signature def. i.e. this # model only has a single input and a single output. input_name = None for k,v in inputs.items(): input_name = v.name break output_name = None for k,v in outputs.items(): output_name = v.name break # Make a dictionary that maps Earth Engine outputs and inputs to # AI Platform inputs and outputs, respectively. import json input_dict = "'" + json.dumps({input_name: "array"}) + "'" output_dict = "'" + json.dumps({output_name: "prediction"}) + "'" # Put the EEified model next to the trained model directory. EEIFIED_DIR = job_dir + '/eeified/' + folder_name # You need to set the project before using the model prepare command. !earthengine set_project {PROJECT_ID} !earthengine model prepare --source_dir {saved_model_path} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}
Running command using Cloud API. Set --no-use_cloud_api to go back to using the API Successfully saved project id Running command using Cloud API. Set --no-use_cloud_api to go back to using the API Success: model at 'gs://skydipper_materials/cnn-models/Sentinel2_WaterQuality/trainer/eeified/1580824709' is ready to be hosted in AI Platform.
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Deploy the model to AI Platform
from googleapiclient import discovery from googleapiclient import errors
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Authenticate your GCP account** Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
%env GOOGLE_APPLICATION_CREDENTIALS {env.privatekey_path} model_name = 'water_quality_test' version_name = 'v' + folder_name project_id = env.project_id
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Create model**
print('Creating model: ' + model_name) # Store your full project ID in a variable in the format the API needs. project = 'projects/{}'.format(project_id) # Build a representation of the Cloud ML API. ml = discovery.build('ml', 'v1') # Create a dictionary with the fields from the request body. request_dict = {'name': model_name, 'description': ''} # Create a request to call projects.models.create. request = ml.projects().models().create( parent=project, body=request_dict) # Make the call. try: response = request.execute() print(response) except errors.HttpError as err: # Something went wrong, print out some information. print('There was an error creating the model. Check the details:') print(err._get_reason())
Creating model: water_quality_test There was an error creating the model. Check the details: Field: model.name Error: A model with the same name already exists.
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Create version**
ml = discovery.build('ml', 'v1') request_dict = { 'name': version_name, 'deploymentUri': EEIFIED_DIR, 'runtimeVersion': '1.14', 'pythonVersion': '3.5', 'framework': 'TENSORFLOW', 'autoScaling': { "minNodes": 10 }, 'machineType': 'mls1-c4-m2' } request = ml.projects().models().versions().create( parent=f'projects/{project_id}/models/{model_name}', body=request_dict ) # Make the call. try: response = request.execute() print(response) except errors.HttpError as err: # Something went wrong, print out some information. print('There was an error creating the model. Check the details:') print(err._get_reason())
{'name': 'projects/skydipper-196010/operations/create_water_quality_test_v1580824709-1580824821325', 'metadata': {'@type': 'type.googleapis.com/google.cloud.ml.v1.OperationMetadata', 'createTime': '2020-02-04T14:00:22Z', 'operationType': 'CREATE_VERSION', 'modelName': 'projects/skydipper-196010/models/water_quality_test', 'version': {'name': 'projects/skydipper-196010/models/water_quality_test/versions/v1580824709', 'deploymentUri': 'gs://skydipper_materials/cnn-models/Sentinel2_WaterQuality/trainer/eeified/1580824709', 'createTime': '2020-02-04T14:00:21Z', 'runtimeVersion': '1.14', 'autoScaling': {'minNodes': 10}, 'etag': 'NbCwe2E94o0=', 'framework': 'TENSORFLOW', 'machineType': 'mls1-c4-m2', 'pythonVersion': '3.5'}}}
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Check deployment status**
def check_status_deployment(model_name, version_name): desc = !gcloud ai-platform versions describe {version_name} --model={model_name} return desc.grep('state:')[0].split(':')[1].strip() print(check_status_deployment(model_name, version_name))
READY
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Load the trained model and use it for prediction in Earth Engine **Variables**
# polygon where we want to display the predictions geometry = { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -2.63671875, 34.56085936708384 ], [ -1.2084960937499998, 34.56085936708384 ], [ -1.2084960937499998, 36.146746777814364 ], [ -2.63671875, 36.146746777814364 ], [ -2.63671875, 34.56085936708384 ] ] ] } } ] }
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Input image** Select the bands and convert them to float
image = images[0].select(bands[0]).float()
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Output image**
# Load the trained model and use it for prediction. model = ee.Model.fromAiPlatformPredictor( projectName = project_id, modelName = model_name, version = version_name, inputTileSize = [1, 1], inputOverlapSize = [0, 0], proj = ee.Projection('EPSG:4326').atScale(scale), fixInputProj = True, outputBands = {'prediction': { 'type': ee.PixelType.float(), 'dimensions': 1, } } ) predictions = model.predictImage(image.toArray()).arrayFlatten([bands[1]]) predictions.getInfo()
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Clip the prediction area with the polygon
# Clip the prediction area with the polygon polygon = ee.Geometry.Polygon(geometry.get('features')[0].get('geometry').get('coordinates')) predictions = predictions.clip(polygon) # Get centroid centroid = polygon.centroid().getInfo().get('coordinates')[::-1]
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display** Use folium to visualize the input imagery and the predictions.
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 1}) map = folium.Map(location=centroid, zoom_start=8) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='median composite', ).add_to(map) params = ee_collection_specifics.vizz_params(collections[1])[0] mapid = images[1].getMapId(params) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name=str(params['bands']), ).add_to(map) for band in bands[1]: mapid = predictions.getMapId({'bands': [band], 'min': 0, 'max': 1}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name=band, ).add_to(map) map.add_child(folium.LayerControl()) map
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Make predictions of an image outside Earth Engine Export the imagery. We export the imagery using the TFRecord format. **Variables**
#Input image image = images[0].select(bands[0]) dataset_name = 'Sentinel2_WaterQuality' file_name = 'image_pixel' bucket = env.bucket_name folder = 'cnn-models/'+dataset_name+'/data' # polygon where we want to display de predictions geometry = { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -2.63671875, 34.56085936708384 ], [ -1.2084960937499998, 34.56085936708384 ], [ -1.2084960937499998, 36.146746777814364 ], [ -2.63671875, 36.146746777814364 ], [ -2.63671875, 34.56085936708384 ] ] ] } } ] } # Specify patch and file dimensions. imageExportFormatOptions = { 'patchDimensions': [256, 256], 'maxFileSize': 104857600, 'compressed': True } # Setup the task. imageTask = ee.batch.Export.image.toCloudStorage( image=image, description='Image Export', fileNamePrefix=folder + '/' + file_name, bucket=bucket, scale=scale, fileFormat='TFRecord', region=geometry.get('features')[0].get('geometry').get('coordinates'), formatOptions=imageExportFormatOptions, ) # Start the task. imageTask.start()
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Read the JSON mixer file** The mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction.
json_file = f'gs://{bucket}' + '/' + folder + '/' + file_name +'.json' # Load the contents of the mixer file to a JSON object. json_text = !gsutil cat {json_file} # Get a single string w/ newlines from the IPython.utils.text.SList mixer = json.loads(json_text.nlstr) pprint(mixer)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
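For reference, the mixer is a small JSON file describing the export grid; the sketch below shows roughly what it contains. The `patchDimensions` and `totalPatches` keys are the ones the prediction code below relies on; the other keys and all of the values shown here are illustrative assumptions, not taken from this run.
{
  "patchDimensions": [256, 256],
  "patchesPerRow": 7,
  "totalPatches": 49,
  "projection": {
    "crs": "EPSG:4326",
    "affine": {"doubleMatrix": [0.0001, 0.0, -2.64, 0.0, -0.0001, 36.15]}
  }
}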
**Read the image files into a dataset** The input needs to be preprocessed differently from the training and testing data. Because the pixels are written into the records as patches, we need to read each patch in as one big tensor (one patch for each band) and then flatten the patches into lots of little per-pixel tensors.
# Get relevant info from the JSON mixer file. PATCH_WIDTH = mixer['patchDimensions'][0] PATCH_HEIGHT = mixer['patchDimensions'][1] PATCHES = mixer['totalPatches'] PATCH_DIMENSIONS_FLAT = [PATCH_WIDTH * PATCH_HEIGHT, 1] features = bands[0] glob = f'gs://{bucket}' + '/' + folder + '/' + file_name +'.tfrecord.gz' # Note that the tensors are in the shape of a patch, one patch for each band. image_columns = [ tf.FixedLenFeature(shape=PATCH_DIMENSIONS_FLAT, dtype=tf.float32) for k in features ] # Parsing dictionary. features_dict = dict(zip(bands[0], image_columns)) def parse_image(proto): return tf.io.parse_single_example(proto, features_dict) image_dataset = tf.data.TFRecordDataset(glob, compression_type='GZIP') image_dataset = image_dataset.map(parse_image, num_parallel_calls=5) # Break our long tensors into many little ones. image_dataset = image_dataset.flat_map( lambda features: tf.data.Dataset.from_tensor_slices(features) ) # Turn the dictionary in each record into a tuple without a label. image_dataset = image_dataset.map( lambda dataDict: (tf.transpose(list(dataDict.values())), ) ) # Turn each patch into a batch. image_dataset = image_dataset.batch(PATCH_WIDTH * PATCH_HEIGHT) image_dataset
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Check the first record**
arr = iter(image_dataset.take(1)).next() input_arr = arr[0].numpy() print(input_arr.shape)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display the input channels**
def display_channels(data, nChannels, titles = False): if nChannels == 1: plt.figure(figsize=(5,5)) plt.imshow(data[:,:,0]) if titles: plt.title(titles[0]) else: fig, axs = plt.subplots(nrows=1, ncols=nChannels, figsize=(5*nChannels,5)) for i in range(nChannels): ax = axs[i] ax.imshow(data[:,:,i]) if titles: ax.set_title(titles[i]) input_arr = input_arr.reshape((PATCH_WIDTH, PATCH_HEIGHT, len(bands[0]))) input_arr.shape display_channels(input_arr, input_arr.shape[2], titles=bands[0])
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Generate predictions for the image pixels To get a prediction for each pixel, run the image dataset through the trained model using model.predict(). Print the first prediction to check the model output for each pixel. Running all the predictions might take a while.
predictions = model.predict(image_dataset, steps=PATCHES, verbose=1) output_arr = predictions.reshape((PATCHES, PATCH_WIDTH, PATCH_HEIGHT, len(bands[1]))) output_arr.shape display_channels(output_arr[9,:,:,:], output_arr.shape[3], titles=bands[1])
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Write the predictions to a TFRecord file We need to write the pixels into the file as patches in the same order they came out. The records are written as serialized `tf.train.Example` protos.
dataset_name = 'Sentinel2_WaterQuality' bucket = env.bucket_name folder = 'cnn-models/'+dataset_name+'/data' output_file = 'gs://' + bucket + '/' + folder + '/predicted_image_pixel.TFRecord' print('Writing to file ' + output_file) # Instantiate the writer. writer = tf.io.TFRecordWriter(output_file) patch = [[]] nPatch = 1 for prediction in predictions: patch[0].append(prediction[0][0]) # Once we've seen a patches-worth of class_ids... if (len(patch[0]) == PATCH_WIDTH * PATCH_HEIGHT): print('Done with patch ' + str(nPatch) + ' of ' + str(PATCHES)) # Create an example example = tf.train.Example( features=tf.train.Features( feature={ 'prediction': tf.train.Feature( float_list=tf.train.FloatList( value=patch[0])) } ) ) # Write the example to the file and clear our patch array so it's ready for # another batch of class ids writer.write(example.SerializeToString()) patch = [[]] nPatch += 1 writer.close()
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Verify the existence of the predictions file**
!gsutil ls -l {output_file}
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Upload the predicted image to an Earth Engine asset
asset_id = 'projects/vizzuality/skydipper-water-quality/predicted-image' print('Writing to ' + asset_id) # Start the upload. !earthengine upload image --asset_id={asset_id} {output_file} {json_file}
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
View the predicted image
# Get centroid polygon = ee.Geometry.Polygon(geometry.get('features')[0].get('geometry').get('coordinates')) centroid = polygon.centroid().getInfo().get('coordinates')[::-1] EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' map = folium.Map(location=centroid, zoom_start=8) for n, collection in enumerate(collections): params = ee_collection_specifics.vizz_params(collection)[0] mapid = images[n].getMapId(params) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name=str(params['bands']), ).add_to(map) # Read predicted Image predicted_image = ee.Image(asset_id) mapid = predicted_image.getMapId({'bands': ['prediction'], 'min': 0, 'max': 1}) folium.TileLayer( tiles=EE_TILES.format(**mapid), attr='Google Earth Engine', overlay=True, name='predicted image', ).add_to(map) map.add_child(folium.LayerControl()) map
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Calculating Area and Center Coordinates of a Polygon
%load_ext lab_black %load_ext autoreload %autoreload 2 import geopandas as gpd import pandas as pd %aimport src.utils from src.utils import show_df
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
[Table of Contents](table-of-contents) 0. [About](about) 1. [User Inputs](user-inputs) 2. [Load Chicago Community Areas GeoData](load-chicago-community-areas-geodata) 3. [Calculate Area of each Community Area](calculate-area-of-each-community-area) 4. [Calculate Coordinates of Midpoint of each Community Area](calculate-coordinates-of-midpoint-of-each-community-area) 0. [About](about) We'll explore calculations of the area and central coordinates of polygons from geospatial data using the Python [`geopandas` library](https://pypi.org/project/geopandas/). 1. [User Inputs](user-inputs)
ca_url = "https://data.cityofchicago.org/api/geospatial/cauq-8yn6?method=export&format=GeoJSON" convert_sqm_to_sqft = 10.7639
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
2. [Load Chicago Community Areas GeoData](load-chicago-community-areas-geodata) Load the boundaries geodata for the [Chicago community areas](https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Community-Areas-current-/cauq-8yn6)
%%time gdf_ca = gpd.read_file(ca_url) print(gdf_ca.crs) gdf_ca.head(2)
epsg:4326 CPU times: user 209 ms, sys: 11.7 ms, total: 221 ms Wall time: 1.1 s
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
3. [Calculate Area of each Community Area](calculate-area-of-each-community-area) To get the area, we need to- project the geometry into a Cylindrical Equal-Area (CEA) projection, an equal-area projection that preserves area ([1](https://learn.arcgis.com/en/projects/choose-the-right-projection/))- calculate the area via the `.area` attribute of the `GeoDataFrame` - this gives the area in square meters- [convert the area from square meters to square feet](https://www.metric-conversions.org/area/square-meters-to-square-feet.htm) - through trial and error, it was found that this is the unit in which the Chicago community areas geodata reports its area (see the `shape_area` column)
%%time gdf_ca["cea_area_square_feet"] = gdf_ca.to_crs({"proj": "cea"}).area * convert_sqm_to_sqft gdf_ca["diff_sq_feet"] = gdf_ca["shape_area"].astype(float) - gdf_ca["cea_area_square_feet"] gdf_ca["diff_pct"] = gdf_ca["diff_sq_feet"] / gdf_ca["shape_area"].astype(float) * 100 show_df(gdf_ca.drop(columns=["geometry"])) display(gdf_ca[["diff_sq_feet", "diff_pct"]].describe())
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
**Observations**1. It is reassuring that the CEA projection has given us areas in square feet that are within 0.01 percent of the areas provided with the Chicago community areas dataset. We'll use this approach to calculate shape areas. 4. [Calculate Coordinates of Midpoint of each Community Area](calculate-coordinates-of-midpoint-of-each-community-area) In order to get the centroid of a geometry, it is [recommended to first project to the CEA CRS (an equal-area CRS) before computing the centroid](https://gis.stackexchange.com/a/401815/135483). [Other commonly used CRS values include 3395, 32663 or 4087](https://gis.stackexchange.com/a/390563/135483). Once the geometry is projected, we can calculate the centroid coordinates via the `.centroid` attribute of the `GeoDataFrame`'s `geometry` column.
%%time centroid_cea = gdf_ca["geometry"].to_crs("+proj=cea").centroid.to_crs(gdf_ca.crs) centroid_3395 = gdf_ca["geometry"].to_crs(epsg=3395).centroid.to_crs(gdf_ca.crs) centroid_32663 = gdf_ca["geometry"].to_crs(epsg=32663).centroid.to_crs(gdf_ca.crs) centroid_4087 = gdf_ca["geometry"].to_crs(epsg=4087).centroid.to_crs(gdf_ca.crs) centroid_6345 = gdf_ca["geometry"].to_crs(epsg=6345).centroid.to_crs(gdf_ca.crs) df_centroid_coords = pd.DataFrame() for c, centroid_coords in zip( ["cea", 3395, 32663, 4087, 6345], [centroid_cea, centroid_3395, centroid_32663, centroid_4087, centroid_6345], ): df_centroid_coords[f"lat_{c}"] = centroid_coords.y df_centroid_coords[f"lon_{c}"] = centroid_coords.x show_df(df_centroid_coords)
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
PTN Template This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
%load_ext autoreload %autoreload 2 %matplotlib inline import os, json, sys, time, random import numpy as np import torch from torch.optim import Adam from easydict import EasyDict import matplotlib.pyplot as plt from steves_models.steves_ptn import Steves_Prototypical_Network from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper from steves_utils.iterable_aggregator import Iterable_Aggregator from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig from steves_utils.torch_sequential_builder import build_sequential from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path) from steves_utils.PTN.utils import independent_accuracy_assesment from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory from steves_utils.ptn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.transforms import get_chained_transform
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run1_limited/trials/8/trial.ipynb
stevester94/csc500-notebooks
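To make the papermill workflow concrete, a driver script injects the parameters and executes this notebook. The sketch below is a minimal illustration, not the project's actual *papermill.py script; the notebook paths and the (deliberately incomplete) parameter values are hypothetical, and a real run would pass every key listed in the required-parameters cell that follows.
# Minimal papermill driver sketch (hypothetical paths and parameter values).
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",       # this template notebook
    "trial_out.ipynb",   # executed copy with the injected parameters
    parameters={"experiment_name": "example", "lr": 0.0001, "seed": 1337},
)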
Required Parameters These are the allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "labels_source", "labels_target", "domains_source", "domains_target", "num_examples_per_domain_per_label_source", "num_examples_per_domain_per_label_target", "n_shot", "n_way", "n_query", "train_k_factor", "val_k_factor", "test_k_factor", "n_epoch", "patience", "criteria_for_best", "x_transforms_source", "x_transforms_target", "episode_transforms_source", "episode_transforms_target", "pickle_name", "x_net", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "torch_default_dtype" } standalone_parameters = {} standalone_parameters["experiment_name"] = "STANDALONE PTN" standalone_parameters["lr"] = 0.0001 standalone_parameters["device"] = "cuda" standalone_parameters["seed"] = 1337 standalone_parameters["dataset_seed"] = 1337 standalone_parameters["num_examples_per_domain_per_label_source"]=100 standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_shot"] = 3 standalone_parameters["n_query"] = 2 standalone_parameters["train_k_factor"] = 1 standalone_parameters["val_k_factor"] = 2 standalone_parameters["test_k_factor"] = 2 standalone_parameters["n_epoch"] = 100 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "target_accuracy" standalone_parameters["x_transforms_source"] = ["unit_power"] standalone_parameters["x_transforms_target"] = ["unit_power"] standalone_parameters["episode_transforms_source"] = [] standalone_parameters["episode_transforms_target"] = [] standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ] # Parameters relevant to results # These parameters will basically never need to change standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # uncomment for CORES dataset from steves_utils.CORES.utils import ( ALL_NODES, ALL_NODES_MINIMUM_1000_EXAMPLES, ALL_DAYS ) standalone_parameters["labels_source"] = ALL_NODES standalone_parameters["labels_target"] = ALL_NODES standalone_parameters["domains_source"] = [1] standalone_parameters["domains_target"] = [2,3,4,5] standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl" # Uncomment these for ORACLE dataset # from steves_utils.ORACLE.utils_v2 import ( # ALL_DISTANCES_FEET, # ALL_RUNS, # ALL_SERIAL_NUMBERS, # ) # standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS # standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS # standalone_parameters["domains_source"] = [8,20, 38,50] # standalone_parameters["domains_target"] = [14, 26, 32, 44, 56] # standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl" # 
standalone_parameters["num_examples_per_domain_per_label_source"]=1000 # standalone_parameters["num_examples_per_domain_per_label_target"]=1000 # Uncomment these for Metahan dataset # standalone_parameters["labels_source"] = list(range(19)) # standalone_parameters["labels_target"] = list(range(19)) # standalone_parameters["domains_source"] = [0] # standalone_parameters["domains_target"] = [1] # standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl" # standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # standalone_parameters["num_examples_per_domain_per_label_source"]=200 # standalone_parameters["num_examples_per_domain_per_label_target"]=100 standalone_parameters["n_way"] = len(standalone_parameters["labels_source"]) # Parameters parameters = { "experiment_name": "tuned_1v2:oracle.run1_limited", "device": "cuda", "lr": 0.0001, "labels_source": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "labels_target": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "episode_transforms_source": [], "episode_transforms_target": [], "domains_source": [8, 32, 50], "domains_target": [14, 20, 26, 38, 44], "num_examples_per_domain_per_label_source": 2000, "num_examples_per_domain_per_label_target": 2000, "n_shot": 3, "n_way": 16, "n_query": 2, "train_k_factor": 3, "val_k_factor": 2, "test_k_factor": 2, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "pickle_name": "oracle.Run1_10kExamples_stratified_ds.2022A.pkl", "x_transforms_source": ["unit_power"], "x_transforms_target": ["unit_power"], "dataset_seed": 1337, "seed": 1337, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters 
are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) ########################################### # The stratified datasets honor this ########################################### torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG # (This is due to the randomized initial weights) ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() ################################### # Build the dataset ################################### if p.x_transforms_source == []: x_transform_source = None else: x_transform_source = get_chained_transform(p.x_transforms_source) if p.x_transforms_target == []: x_transform_target = None else: x_transform_target = get_chained_transform(p.x_transforms_target) if p.episode_transforms_source == []: episode_transform_source = None else: raise Exception("episode_transform_source not implemented") if p.episode_transforms_target == []: episode_transform_target = None else: raise Exception("episode_transform_target not implemented") eaf_source = Episodic_Accessor_Factory( labels=p.labels_source, domains=p.domains_source, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_source, example_transform_func=episode_transform_source, ) train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test() eaf_target = Episodic_Accessor_Factory( labels=p.labels_target, domains=p.domains_target, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target, iterator_seed=p.seed, dataset_seed=p.dataset_seed, n_shot=p.n_shot, n_way=p.n_way, n_query=p.n_query, train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor), pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name), x_transform_func=x_transform_target, example_transform_func=episode_transform_target, ) train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test() transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda) val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda) test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda) train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda) val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda) test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, 
"test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) # Some quick unit tests on the data from steves_utils.transforms import get_average_power, get_average_magnitude q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source)) assert q_x.dtype == eval(p.torch_default_dtype) assert s_x.dtype == eval(p.torch_default_dtype) print("Visually inspect these to see if they line up with expected values given the transforms") print('x_transforms_source', p.x_transforms_source) print('x_transforms_target', p.x_transforms_target) print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy())) print("Average power, source:", get_average_power(q_x[0].numpy())) q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target)) print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy())) print("Average power, target:", get_average_power(q_x[0].numpy())) ################################### # Build the model ################################### model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256)) optimizer = Adam(params=model.parameters(), lr=p.lr) ################################### # train ################################### jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, patience=p.patience, optimizer=optimizer, criteria_for_best=p.criteria_for_best, ) total_experiment_time_secs = time.time() - start_time_secs ################################### # Evaluate the model ################################### source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val)) confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! 
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) experiment = { "experiment_name": p.experiment_name, "parameters": dict(p), "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "ptn"), } ax = get_loss_curve(experiment) plt.show() get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment)
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run1_limited/trials/8/trial.ipynb
stevester94/csc500-notebooks
Batch Normalization – Practice Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now. This is **not** a good network for classifying MNIST digits. You could create a _much_ simpler network and get _better_ results. However, to give you hands-on experience with batch normalization, we had to make an example that was: 1. Complicated enough that training would benefit from batch normalization. 2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization. 3. Simple enough that the architecture would be easy to understand without additional resources. This notebook includes two versions of the network that you can edit. The first uses higher-level functions from the `tf.layers` package. The second is the same network, but uses only lower-level functions in the `tf.nn` package. 1. [Batch Normalization with `tf.layers.batch_normalization`](example_1) 2. [Batch Normalization with `tf.nn.batch_normalization`](example_2) The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named `mnist`. You'll need to run this cell before running anything else in the notebook.
import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
Batch Normalization using `tf.layers.batch_normalization` This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function. This version of the function does not include batch normalization.
""" DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu) return layer
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 2x2 on layers whose depth is a multiple of 3, and strides of 1x1 on all other layers. We aren't bothering with pooling layers at all in this network. This version of the function does not include batch normalization.
""" DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu) return conv_layer
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions). This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
""" DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly. correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]]}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate)
Batch: 0: Validation loss: 0.69052, Validation accuracy: 0.10020 Batch: 25: Training loss: 0.41248, Training accuracy: 0.07812 Batch: 50: Training loss: 0.32848, Training accuracy: 0.04688 Batch: 75: Training loss: 0.32555, Training accuracy: 0.07812 Batch: 100: Validation loss: 0.32519, Validation accuracy: 0.11000 Batch: 125: Training loss: 0.32458, Training accuracy: 0.07812 Batch: 150: Training loss: 0.32654, Training accuracy: 0.12500 Batch: 175: Training loss: 0.32703, Training accuracy: 0.03125 Batch: 200: Validation loss: 0.32540, Validation accuracy: 0.11260 Batch: 225: Training loss: 0.32345, Training accuracy: 0.18750 Batch: 250: Training loss: 0.32359, Training accuracy: 0.07812 Batch: 275: Training loss: 0.32836, Training accuracy: 0.07812 Batch: 300: Validation loss: 0.32573, Validation accuracy: 0.11260 Batch: 325: Training loss: 0.32430, Training accuracy: 0.07812 Batch: 350: Training loss: 0.32710, Training accuracy: 0.07812 Batch: 375: Training loss: 0.32377, Training accuracy: 0.15625 Batch: 400: Validation loss: 0.32518, Validation accuracy: 0.09900 Batch: 425: Training loss: 0.32419, Training accuracy: 0.09375 Batch: 450: Training loss: 0.32710, Training accuracy: 0.04688 Batch: 475: Training loss: 0.32596, Training accuracy: 0.09375 Batch: 500: Validation loss: 0.32536, Validation accuracy: 0.11260 Batch: 525: Training loss: 0.32429, Training accuracy: 0.03125 Batch: 550: Training loss: 0.32544, Training accuracy: 0.09375 Batch: 575: Training loss: 0.32535, Training accuracy: 0.12500 Batch: 600: Validation loss: 0.32552, Validation accuracy: 0.10020 Batch: 625: Training loss: 0.32403, Training accuracy: 0.10938 Batch: 650: Training loss: 0.32617, Training accuracy: 0.09375 Batch: 675: Training loss: 0.32527, Training accuracy: 0.12500 Batch: 700: Validation loss: 0.32512, Validation accuracy: 0.11000 Batch: 725: Training loss: 0.32503, Training accuracy: 0.17188 Batch: 750: Training loss: 0.32640, Training accuracy: 0.09375 Batch: 775: Training loss: 0.32589, Training accuracy: 0.07812 Final validation accuracy: 0.09860 Final test accuracy: 0.10100 Accuracy on 100 samples: 0.11
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches. Add batch normalization We've copied the previous three cells to get you started. **Edit these cells** to add batch normalization to the network. For this exercise, you should use [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization) to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. If you get stuck, you can check out the `Batch_Normalization_Solutions` notebook to see how we did things. **TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) layer = tf.layers.batch_normalization(layer, training=is_training) layer = tf.nn.relu(layer) return layer
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None, use_bias=False) conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training) conv_layer = tf.nn.relu(conv_layer) return conv_layer
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Add placeholder to indicate whether or not we're training the model is_training = tf.placeholder(tf.bool) # Feed the inputs into a series of 20 convolutional layers layer = inputs for layer_i in range(1, 20): layer = conv_layer(layer, layer_i, is_training) # Flatten the output from the convolutional layers orig_shape = layer.get_shape().as_list() layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]]) # Add one fully connected layer layer = fully_connected(layer, 100, is_training) # Create the output layer with 1 node for each logits = tf.layers.dense(layer, 10) # Define loss and training operations model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) # Tell TensorFlow to update the population statistics while training with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss) # Create operations to test accuracy correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1)) accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) # Train and test the network with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for batch_i in range(num_batches): batch_xs, batch_ys = mnist.train.next_batch(batch_size) # train this batch sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True}) # Periodically check the validation or training loss and accuracy if batch_i % 100 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc)) elif batch_i % 25 == 0: loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False}) print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc)) # At the end, score the final accuracy for both the validation and test sets acc = sess.run(accuracy, {inputs: mnist.validation.images, labels: mnist.validation.labels, is_training: False}) print('Final validation accuracy: {:>3.5f}'.format(acc)) acc = sess.run(accuracy, {inputs: mnist.test.images, labels: mnist.test.labels, is_training: False}) print('Final test accuracy: {:>3.5f}'.format(acc)) # Score the first 100 test images individually, just to make sure batch normalization really worked correct = 0 for i in range(100): correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]], labels: [mnist.test.labels[i]], is_training: False}) print("Accuracy on 100 samples:", correct/100) num_batches = 800 batch_size = 64 learning_rate = 0.002 tf.reset_default_graph() with tf.Graph().as_default(): train(num_batches, batch_size, learning_rate)
Batch: 0: Validation loss: 0.69111, Validation accuracy: 0.08680 Batch: 25: Training loss: 0.58037, Training accuracy: 0.17188 Batch: 50: Training loss: 0.46359, Training accuracy: 0.10938 Batch: 75: Training loss: 0.39624, Training accuracy: 0.07812 Batch: 100: Validation loss: 0.35559, Validation accuracy: 0.10020 Batch: 125: Training loss: 0.34059, Training accuracy: 0.12500 Batch: 150: Training loss: 0.33566, Training accuracy: 0.06250 Batch: 175: Training loss: 0.32763, Training accuracy: 0.21875 Batch: 200: Validation loss: 0.40874, Validation accuracy: 0.11260 Batch: 225: Training loss: 0.41788, Training accuracy: 0.09375 Batch: 250: Training loss: 0.50921, Training accuracy: 0.18750 Batch: 275: Training loss: 0.40777, Training accuracy: 0.35938 Batch: 300: Validation loss: 0.62787, Validation accuracy: 0.20260 Batch: 325: Training loss: 0.46186, Training accuracy: 0.42188 Batch: 350: Training loss: 0.20306, Training accuracy: 0.71875 Batch: 375: Training loss: 0.06057, Training accuracy: 0.90625 Batch: 400: Validation loss: 0.07048, Validation accuracy: 0.89720 Batch: 425: Training loss: 0.00765, Training accuracy: 0.98438 Batch: 450: Training loss: 0.01864, Training accuracy: 0.95312 Batch: 475: Training loss: 0.02225, Training accuracy: 0.95312 Batch: 500: Validation loss: 0.04807, Validation accuracy: 0.93200 Batch: 525: Training loss: 0.02990, Training accuracy: 0.96875 Batch: 550: Training loss: 0.06346, Training accuracy: 0.92188 Batch: 575: Training loss: 0.07358, Training accuracy: 0.90625 Batch: 600: Validation loss: 0.06977, Validation accuracy: 0.89360 Batch: 625: Training loss: 0.00792, Training accuracy: 0.98438 Batch: 650: Training loss: 0.04138, Training accuracy: 0.92188 Batch: 675: Training loss: 0.05289, Training accuracy: 0.92188 Batch: 700: Validation loss: 0.02661, Validation accuracy: 0.96060 Batch: 725: Training loss: 0.03836, Training accuracy: 0.96875 Batch: 750: Training loss: 0.03171, Training accuracy: 0.95312 Batch: 775: Training loss: 0.02621, Training accuracy: 0.96875 Final validation accuracy: 0.95760 Final test accuracy: 0.96350 Accuracy on 100 samples: 0.98
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.

Batch Normalization using `tf.nn.batch_normalization`

Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.

This version of the network uses `tf.nn` for almost everything, and expects you to implement batch normalization using [`tf.nn.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization).

**Optional TODO:** You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.

**TODO:** Modify `fully_connected` to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.

**Note:** For convenience, we continue to use `tf.layers.dense` for the `fully_connected` layer. By this point in the class, you should have no problem replacing that with matrix operations between the `prev_layer` and explicit weights and biases variables.
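As a side illustration of the note above, here is a minimal sketch of what the dense part of `fully_connected` could look like with explicit variables instead of `tf.layers.dense`. It is not part of the exercise solution; the initializer and its standard deviation are arbitrary assumptions, and `prev_layer` and `num_units` refer to the same names used in the function below. The actual solution keeps `tf.layers.dense` as suggested.

# Sketch only: an explicit-weights replacement for tf.layers.dense
num_inputs = prev_layer.get_shape().as_list()[-1]
weights = tf.Variable(tf.truncated_normal([num_inputs, num_units], stddev=0.05))
# No bias term is needed here, because batch normalization's beta parameter takes its place
layer = tf.matmul(prev_layer, weights)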
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the number of units, nodes, or neurons. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new fully connected layer """ layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None) gamma = tf.Variable(tf.ones([num_units])) beta = tf.Variable(tf.zeros([num_units])) pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False) pop_variance = tf.Variable(tf.ones([num_units]), trainable=False) epsilon = 1e-3 def batch_norm_training(): batch_mean, batch_variance = tf.nn.moments(layer, [0]) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output)
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.

**Note:** Unlike in the previous example that used `tf.layers`, adding batch normalization to these convolutional layers _does_ require some slight differences to what you did in `fully_connected`.
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's depth in the network. This is *not* a good way to make a CNN, but it helps us create this example with very little code. :param is_training: bool or Tensor Indicates whether or not the network is currently training, which tells the batch normalization layer whether or not it should update or use its population statistics. :returns Tensor A new convolutional layer """ strides = 2 if layer_depth % 3 == 0 else 1 in_channels = prev_layer.get_shape().as_list()[3] out_channels = layer_depth*4 weights = tf.Variable( tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)) layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME') gamma = tf.Variable(tf.ones([out_channels])) beta = tf.Variable(tf.zeros([out_channels])) pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False) pop_variance = tf.Variable(tf.ones([out_channels]), trainable=False) epsilon = 1e-3 def batch_norm_training(): # Important to use the correct dimensions here to ensure the mean and variance are calculated # per feature map instead of for the entire layer batch_mean, batch_variance = tf.nn.moments(layer, [0,1,2], keep_dims=False) decay = 0.99 train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay)) train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay)) with tf.control_dependencies([train_mean, train_variance]): return tf.nn.batch_normalization(layer, batch_mean, batch_variance, beta, gamma, epsilon) def batch_norm_inference(): return tf.nn.batch_normalization(layer, pop_mean, pop_variance, beta, gamma, epsilon) batch_normalized_output = tf.cond(is_training, batch_norm_training, batch_norm_inference) return tf.nn.relu(batch_normalized_output)
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
def train(num_batches, batch_size, learning_rate):
    # Build placeholders for the input samples and labels
    inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
    labels = tf.placeholder(tf.float32, [None, 10])

    # Add placeholder to indicate whether or not we're training the model
    is_training = tf.placeholder(tf.bool)

    # Feed the inputs into a series of 20 convolutional layers
    layer = inputs
    for layer_i in range(1, 20):
        layer = conv_layer(layer, layer_i, is_training)

    # Flatten the output from the convolutional layers
    orig_shape = layer.get_shape().as_list()
    layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])

    # Add one fully connected layer
    layer = fully_connected(layer, 100, is_training)

    # Create the output layer with 1 node for each of the 10 digit classes
    logits = tf.layers.dense(layer, 10)

    # Define loss and training operations
    model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)

    # Create operations to test accuracy
    correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Train and test the network
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_i in range(num_batches):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)

            # train this batch
            sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})

            # Periodically check the validation or training loss and accuracy
            if batch_i % 100 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
                                                              labels: mnist.validation.labels,
                                                              is_training: False})
                print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
            elif batch_i % 25 == 0:
                loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
                print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))

        # At the end, score the final accuracy for both the validation and test sets
        acc = sess.run(accuracy, {inputs: mnist.validation.images,
                                  labels: mnist.validation.labels,
                                  is_training: False})
        print('Final validation accuracy: {:>3.5f}'.format(acc))
        acc = sess.run(accuracy, {inputs: mnist.test.images,
                                  labels: mnist.test.labels,
                                  is_training: False})
        print('Final test accuracy: {:>3.5f}'.format(acc))

        # Score the first 100 test images individually, just to make sure batch normalization really worked
        correct = 0
        for i in range(100):
            correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
                                                    labels: [mnist.test.labels[i]],
                                                    is_training: False})

        print("Accuracy on 100 samples:", correct/100)


num_batches = 800
batch_size = 64
learning_rate = 0.002

tf.reset_default_graph()
with tf.Graph().as_default():
    train(num_batches, batch_size, learning_rate)
Batch: 0: Validation loss: 0.69128, Validation accuracy: 0.09580 Batch: 25: Training loss: 0.58242, Training accuracy: 0.07812 Batch: 50: Training loss: 0.46814, Training accuracy: 0.07812 Batch: 75: Training loss: 0.40309, Training accuracy: 0.17188 Batch: 100: Validation loss: 0.36373, Validation accuracy: 0.09900 Batch: 125: Training loss: 0.35578, Training accuracy: 0.07812 Batch: 150: Training loss: 0.33116, Training accuracy: 0.10938 Batch: 175: Training loss: 0.34014, Training accuracy: 0.15625 Batch: 200: Validation loss: 0.35679, Validation accuracy: 0.09900 Batch: 225: Training loss: 0.36367, Training accuracy: 0.06250 Batch: 250: Training loss: 0.48576, Training accuracy: 0.10938 Batch: 275: Training loss: 0.45041, Training accuracy: 0.10938 Batch: 300: Validation loss: 0.60292, Validation accuracy: 0.11260 Batch: 325: Training loss: 0.90907, Training accuracy: 0.12500 Batch: 350: Training loss: 1.21087, Training accuracy: 0.09375 Batch: 375: Training loss: 0.84756, Training accuracy: 0.10938 Batch: 400: Validation loss: 0.82665, Validation accuracy: 0.16000 Batch: 425: Training loss: 0.45936, Training accuracy: 0.28125 Batch: 450: Training loss: 0.70676, Training accuracy: 0.21875 Batch: 475: Training loss: 0.22090, Training accuracy: 0.75000 Batch: 500: Validation loss: 0.18597, Validation accuracy: 0.78500 Batch: 525: Training loss: 0.06446, Training accuracy: 0.87500 Batch: 550: Training loss: 0.03445, Training accuracy: 0.95312 Batch: 575: Training loss: 0.03627, Training accuracy: 0.96875 Batch: 600: Validation loss: 0.05220, Validation accuracy: 0.92260 Batch: 625: Training loss: 0.01909, Training accuracy: 0.98438 Batch: 650: Training loss: 0.02751, Training accuracy: 0.96875 Batch: 675: Training loss: 0.00516, Training accuracy: 1.00000 Batch: 700: Validation loss: 0.06646, Validation accuracy: 0.92720 Batch: 725: Training loss: 0.03347, Training accuracy: 0.92188 Batch: 750: Training loss: 0.06926, Training accuracy: 0.90625 Batch: 775: Training loss: 0.02755, Training accuracy: 0.96875 Final validation accuracy: 0.96560 Final test accuracy: 0.96320 Accuracy on 100 samples: 0.97
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
PyTorch on GPU: first steps

Put tensor to GPU
import torch

# Target the first CUDA device (this assumes a GPU is present)
device = torch.device("cuda:0")
my_tensor = torch.Tensor([1., 2., 3., 4., 5.])
# .to(device) returns a copy on the GPU; the original tensor stays on the CPU
mytensor = my_tensor.to(device)
mytensor
my_tensor
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
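A small follow-up sketch, not part of the original notebook: it is often safer to choose the device based on availability, and a tensor can be moved back to the CPU when needed, for example before converting it to NumPy.

# Illustrative only: fall back to the CPU when no GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
t = torch.tensor([1., 2., 3.]).to(device)
print(t.device)       # shows which device the tensor currently lives on
t_cpu = t.cpu()       # CPU copy, required e.g. before calling .numpy()
print(t_cpu.numpy())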
Put model to GPU
from torch import nn class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, output_size) def forward(self, input): output = self.fc(input) print("\tIn Model: input size", input.size(), "output size", output.size()) return output input_size = 128 output_size = 128 model = Model(input_size, output_size) device = torch.device("cuda:0") model.to(device)
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
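One detail worth showing at this point, as a sketch that reuses `model`, `device`, and `input_size` from the cell above: once the model lives on the GPU, its inputs must be moved to the same device before the forward pass, otherwise PyTorch raises a device-mismatch error.

# Assumes the Model instance and device defined in the previous cell
batch = torch.randn(16, input_size).to(device)
output = model(batch)   # forward pass now runs on the GPU
print(output.device)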
Data parallelism
from torch.nn import DataParallel torch.cuda.is_available() torch.cuda.device_count()
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
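The cell above only checks that CUDA is available and how many devices there are. As a hedged sketch, reusing the `Model` class, `input_size`, `output_size`, and `device` defined earlier, `DataParallel` would typically be applied like this; when more than one GPU is present, each input batch is split across the devices automatically.

# Sketch only: wrap the model when several GPUs are present
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    model = DataParallel(model)
model.to(device)
# The wrapped model is called exactly like the plain one
output = model(torch.randn(64, input_size).to(device))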
Part on CPU, part on GPU
device = torch.device("cuda:0") class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, 100) self.fc2 = nn.Linear(100, output_size).to(device) def forward(self, x): # Compute first layer on CPU x = self.fc(x) # Transfer to GPU x = x.to(device) # Compute second layer on GPU x = self.fc2(x) return x input_size = 100 output_size = 50 data_length = 1000 data = torch.randn(data_length, input_size) model = Model(input_size, output_size) model.forward(data)
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
Country Economic Conditions for Cargo Carriers

This report is written from the point of view of a data scientist preparing a report for the Head of Analytics at a logistics company. The company needs information on economic and financial conditions in different countries, including data on their international trade, so it is aware of any situations that could affect its business.

Data Summary

This dataset is taken from the International Monetary Fund (IMF) data bank. It lists economic and financial statistics for countries worldwide, including gross domestic product (GDP), inflation, exports and imports, and government borrowing and revenue. Values are reported in either US dollars or local currency, depending on the country and year. Some variables, such as inflation and unemployment, are given as percentages.

Data Exploration

The initial plan for data exploration is to first model the data on country GDP and inflation, then to look further into trade statistics.
# Import required packages
import numpy as np
import pandas as pd
from scipy import stats
import math
from sklearn import datasets, linear_model
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm

# Import IMF World Economic Outlook data from GitHub
WEO = pd.read_csv('https://raw.githubusercontent.com/jamiemfraser/machine_learning/main/WEOApr2021all.csv')
WEO = pd.DataFrame(WEO)
WEO.head()

# Print basic details of the dataset
print(WEO.shape[0])
print(WEO.columns.tolist())
print(WEO.dtypes)
# Shows that all numeric columns are type float, and string columns are type object
4289 ['CountryCode', 'Country', 'Indicator', 'Notes', 'Units', 'Scale', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019'] CountryCode object Country object Indicator object Notes object Units object Scale object 2000 float64 2001 float64 2002 float64 2003 float64 2004 float64 2005 float64 2006 float64 2007 float64 2008 float64 2009 float64 2010 float64 2011 float64 2012 float64 2013 float64 2014 float64 2015 float64 2016 float64 2017 float64 2018 float64 2019 float64 dtype: object
CC0-1.0
Country_Economic_Conditions_for_Cargo_Carriers.ipynb
jamiemfraser/machine_learning
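As a first step toward the exploration plan above, a hedged sketch of how the GDP and inflation series might be pulled out of the table. The search terms are assumptions about how the indicators are labelled and should be checked against `WEO['Indicator'].unique()` before relying on them.

# Sketch only: keep rows whose Indicator mentions GDP or inflation
gdp_inflation = WEO[WEO['Indicator'].str.contains('gross domestic product|inflation',
                                                  case=False, na=False)]
print(gdp_inflation[['Country', 'Indicator', '2019']].head())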