Dataset columns:

| column | type | length (min – max) |
|---|---|---|
| markdown | string | 0 – 1.02M |
| code | string | 0 – 832k |
| output | string | 0 – 1.02M |
| license | string | 3 – 36 |
| path | string | 6 – 265 |
| repo_name | string | 6 – 127 |
As expected, this plot shows that all of the agreement metrics as well as PRMSE rank the systems correctly, since all of them are being computed against the same rater pair. Note that systems known to have better performance rank "higher".

Step 3: Evaluate each system against a different pair of raters

Now, we change things up and evaluate the scores assigned by each of our simulated systems in the dataset against a _different_ pair of simulated human raters from the dataset. We always sample both human raters from the same category, but different _pairs_ of raters are sampled from different categories, so that each system is evaluated against a rater pair with a different level of average inter-rater agreement.
# first let's get rater pairs within each category
rater_pairs_per_category = df_rater_metadata.groupby('rater_category')['rater_id'].apply(
    lambda values: itertools.combinations(values, 2))

# next let's combine all possible rater pairs across the categories
combined_rater_pairs = [f"{rater_id1}+{rater_id2}"
                        for rater_id1, rater_id2
                        in itertools.chain.from_iterable(rater_pairs_per_category.values)]

# finally sample a rater pair for each of the systems
prng = np.random.RandomState(1234567890)
num_systems = dataset.num_systems_per_category * len(dataset.system_categories)
rater_pairs_for_systems = prng.choice(combined_rater_pairs, size=num_systems, replace=False)
rater_pairs_for_systems = [rater_pair.split('+') for rater_pair in rater_pairs_for_systems]
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
Before we proceed, let's see how the different system categories are distributed across different rater pairs.
# create a dataframe from our rater pairs with h1 and h2 as the two columns
df_rater_pairs_with_categories = pd.DataFrame(data=rater_pairs_for_systems, columns=['h1', 'h2'])

# add in the system metadata
df_rater_pairs_with_categories['system_id'] = df_system_metadata['system_id']
df_rater_pairs_with_categories['system_category'] = df_system_metadata['system_category']

# merge in the rater metadata
# we can use h1 since the pairs are always drawn from the same category of raters
df_rater_pairs_with_categories = pd.merge(df_rater_pairs_with_categories,
                                          df_rater_metadata,
                                          left_on='h1',
                                          right_on='rater_id')

system_category_by_rater_category_table = pd.crosstab(
    df_rater_pairs_with_categories['system_category'],
    df_rater_pairs_with_categories['rater_category']).loc[dataset.system_categories,
                                                          dataset.rater_categories]
system_category_by_rater_category_table
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
As the table shows, systems in different categories were evaluated against rater pairs with different levels of agreement. For example, 3 out of 5 systems in the "low" category were evaluated against raters with "high" agreement, while 3 out of 5 systems in the "medium" category were evaluated against raters with "low" agreement. Next, we compute the values of the conventional agreement metrics and PRMSE for each system against its corresponding rater pair.
# initialize an empty list to hold the metric values for each system ID
metric_values_list = []

# iterate over each system ID
for system_id, rater_id1, rater_id2 in zip(df_rater_pairs_with_categories['system_id'],
                                           df_rater_pairs_with_categories['h1'],
                                           df_rater_pairs_with_categories['h2']):

    # compute the agreement metrics for all of the systems in this category against our chosen rater pair
    metric_values, _ = compute_agreement_one_system_one_rater_pair(df_scores,
                                                                   system_id,
                                                                   rater_id1,
                                                                   rater_id2,
                                                                   include_mean=True)

    # now compute the PRMSE value of the system against the two raters
    metric_values['PRMSE'] = prmse_true(df_scores[system_id], df_scores[[rater_id1, rater_id2]])

    # save the system ID since we will need it later
    metric_values['system_id'] = system_id

    # save this list of metrics in the list
    metric_values_list.append(metric_values)

# now create a data frame with all of the metric values for all system IDs
df_metrics_different_rater_pairs = pd.DataFrame(metric_values_list)

# merge in the system category from the metadata since we need that for plotting
df_metrics_different_rater_pairs_with_categories = df_metrics_different_rater_pairs.merge(df_system_metadata,
                                                                                          left_on='system_id',
                                                                                          right_on='system_id')

# keep only the columns we need
df_metrics_different_rater_pairs_with_categories = df_metrics_different_rater_pairs_with_categories[
    ['system_id', 'system_category', 'r', 'QWK', 'R2', 'degradation', 'PRMSE']]
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
Now that we have computed the metrics, we can plot each simulated system's measured performance via each of the metrics against its known performance, as indicated by its system category.
# now create a longer version of this data frame that's more amenable to plotting
df_metrics_different_rater_pairs_with_categories_long = df_metrics_different_rater_pairs_with_categories.melt(
    id_vars=['system_id', 'system_category'], var_name='metric')

# plot the metric values by system category
ax = sns.catplot(col='metric',
                 y='value',
                 x='system_category',
                 data=df_metrics_different_rater_pairs_with_categories_long,
                 kind='box',
                 order=dataset.system_categories)
plt.show()
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
From this plot, we can see that only the PRMSE values accurately separate the systems from each other, whereas the other metrics are not able to do so. Next, let's plot how the different systems are ranked by each of the metrics and compare these ranks to the ranks from the same-rater scenario.
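The next cell uses a `compute_ranks_from_metrics` helper that is defined elsewhere in the repository. As a rough sketch only (assuming rank 1 means best, and that higher values are better for every metric except `degradation`, which is better when lower), such a helper could look like the hypothetical function below:

import pandas as pd

def compute_ranks_from_metrics_sketch(df_metrics):
    """Rank systems by each metric column (1 = best); illustrative sketch only."""
    ranked = df_metrics[['system_id', 'system_category']].copy()
    metric_columns = [column for column in df_metrics.columns
                      if column not in ('system_id', 'system_category')]
    for column in metric_columns:
        # assumption: lower is better for 'degradation', higher is better otherwise
        ascending = (column == 'degradation')
        ranked[column] = df_metrics[column].rank(ascending=ascending, method='min')
    return ranked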
# get the ranks for the metrics
df_ranks_different_rater_pairs = compute_ranks_from_metrics(df_metrics_different_rater_pairs_with_categories)

# and now get a longer version of this data frame that's more amenable to plotting
df_ranks_different_rater_pairs_long = df_ranks_different_rater_pairs.melt(id_vars=['system_category', 'system_id'],
                                                                          var_name='metric',
                                                                          value_name='rank')

# we also merge in the ranks from the same-rater scenario and distinguish the two sets of ranks with suffixes
df_all_ranks_long = df_ranks_different_rater_pairs_long.merge(df_ranks_same_rater_pair_long,
                                                              left_on=['system_id', 'system_category', 'metric'],
                                                              right_on=['system_id', 'system_category', 'metric'],
                                                              suffixes=('_diff', '_same'))

# now make a plot that shows the comparison of the two sets of ranks
with sns.plotting_context('notebook', font_scale=1.1):
    g = sns.FacetGrid(df_all_ranks_long, col='metric', height=4, aspect=0.6)
    g.map(sns.boxplot, 'system_category', 'rank_diff', order=dataset.system_categories, color='grey')
    g.map(sns.stripplot, 'system_category', 'rank_same', order=dataset.system_categories, color='red', jitter=False)
    (g.set_xticklabels(dataset.system_categories, rotation=90)
      .set(xlabel='system category')
      .set(ylabel='rank'))
    plt.show()
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
As this plot shows, only the PRMSE metric is still able to rank the systems accurately, whereas the other metrics are not. We can also make another plot that shows a more direct comparison between $R^2$ and PRMSE.
# plot PRMSE and R2 ranks only
df_r2_prmse_ranks_long = df_all_ranks_long[df_all_ranks_long['metric'].isin(['R2', 'PRMSE'])]
ax = sns.boxplot(x='system_category',
                 y='rank_diff',
                 hue='metric',
                 hue_order=['R2', 'PRMSE'],
                 data=df_r2_prmse_ranks_long)
ax.set_xlabel('system category')
ax.set_ylabel('rank')
plt.show()
_____no_output_____
MIT
notebooks/ranking_multiple_systems.ipynb
EducationalTestingService/prmse-simulations
Hyperparameter tuning with Cloud ML Engine

**Learning Objectives:**
* Improve the accuracy of a model by hyperparameter tuning
import os

PROJECT = 'cloud-training-demos'    # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml'  # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1'              # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
os.environ['TFVERSION'] = '1.8'     # Tensorflow version

# for bash
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION

%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/b_hyperparam.ipynb
cjqian/training-data-analyst
Create command-line program

In order to submit to Cloud ML Engine, we need to create a distributed training program. Let's convert our housing example to fit that paradigm, using the Estimators API.
%%bash rm -rf trainer mkdir trainer touch trainer/__init__.py %%writefile trainer/house.py import os import math import json import shutil import argparse import numpy as np import pandas as pd import tensorflow as tf def train(output_dir, batch_size, learning_rate): tf.logging.set_verbosity(tf.logging.INFO) # Read dataset and split into train and eval df = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",") df['num_rooms'] = df['total_rooms'] / df['households'] msk = np.random.rand(len(df)) < 0.8 traindf = df[msk] evaldf = df[~msk] # Train and eval input functions SCALE = 100000 train_input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]], y = traindf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = batch_size, # note the batch size shuffle = True) eval_input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]], y = evaldf["median_house_value"] / SCALE, # note the scaling num_epochs = 1, batch_size = len(evaldf), shuffle=False) # Define feature columns features = [tf.feature_column.numeric_column('num_rooms')] def train_and_evaluate(output_dir): # Compute appropriate number of steps num_steps = (len(traindf) / batch_size) / learning_rate # if learning_rate=0.01, hundred epochs # Create custom optimizer myopt = tf.train.FtrlOptimizer(learning_rate = learning_rate) # note the learning rate # Create rest of the estimator as usual estimator = tf.estimator.LinearRegressor(model_dir = output_dir, feature_columns = features, optimizer = myopt) train_spec = tf.estimator.TrainSpec(input_fn = train_input_fn, max_steps = num_steps) eval_spec = tf.estimator.EvalSpec(input_fn = eval_input_fn, steps = None) tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec) # Run the training shutil.rmtree(output_dir, ignore_errors=True) # start fresh each time train_and_evaluate(output_dir) if __name__ == '__main__' and "get_ipython" not in dir(): parser = argparse.ArgumentParser() parser.add_argument( '--learning_rate', type = float, default = 0.01 ) parser.add_argument( '--batch_size', type = int, default = 30 ), parser.add_argument( '--job-dir', help = 'GCS location to write checkpoints and export models.', required = True ) args = parser.parse_args() print("Writing checkpoints to {}".format(args.job_dir)) train(args.job_dir, args.batch_size, args.learning_rate) %%bash rm -rf house_trained gcloud ml-engine local train \ --module-name=trainer.house \ --job-dir=house_trained \ --package-path=$(pwd)/trainer \ -- \ --batch_size=30 \ --learning_rate=0.02
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/b_hyperparam.ipynb
cjqian/training-data-analyst
Create hyperparam.yaml
%%writefile hyperparam.yaml
trainingInput:
  hyperparameters:
    goal: MINIMIZE
    maxTrials: 5
    maxParallelTrials: 1
    hyperparameterMetricTag: average_loss
    params:
    - parameterName: batch_size
      type: INTEGER
      minValue: 8
      maxValue: 64
      scaleType: UNIT_LINEAR_SCALE
    - parameterName: learning_rate
      type: DOUBLE
      minValue: 0.01
      maxValue: 0.1
      scaleType: UNIT_LOG_SCALE

%%bash
OUTDIR=gs://${BUCKET}/house_trained   # CHANGE bucket name appropriately
gsutil rm -rf $OUTDIR
gcloud ml-engine jobs submit training house_$(date -u +%y%m%d_%H%M%S) \
  --config=hyperparam.yaml \
  --module-name=trainer.house \
  --package-path=$(pwd)/trainer \
  --job-dir=$OUTDIR \
  --runtime-version=$TFVERSION

!gcloud ml-engine jobs describe house_180403_231031 # CHANGE jobId appropriately
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive/05_artandscience/b_hyperparam.ipynb
cjqian/training-data-analyst
Big Brother - Healthcare edition

Building a classifier using the [fastai](https://www.fast.ai/) library
from fastai.tabular import *

#hide
path = Path('./covid19_ml_education')
df = pd.read_csv(path/'covid_ml.csv')
df.head(3)
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Dependent variable (target)

This is the value we want to predict
y_col = 'urgency_of_admission'
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Independent variables (features)

The values on which we can make a prediction
cat_names = ['sex', 'cough', 'fever', 'chills', 'sore_throat', 'headache', 'fatigue']
cat_names = ['sex', 'cough', 'fever', 'headache', 'fatigue']  # the second assignment overrides the first
cont_names = ['age']

#hide
procs = [FillMissing, Categorify, Normalize]

#hide
test = TabularList.from_df(df.iloc[660:861].copy(), path=path, cat_names=cat_names, cont_names=cont_names)

data = (TabularList.from_df(df, path=path, cat_names=cat_names, cont_names=cont_names, procs=procs)
        .split_by_rand_pct(0.2)
        .label_from_df(cols=y_col)
        # .add_test(test)
        .databunch())

data.show_batch(rows=5)
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Model

Here we build our machine learning model that will learn from the dataset to classify patients, using focal loss.
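For reference, this is the $\alpha$-balanced focal loss (Lin et al., 2017) that the `FocalLoss` class in the next cell implements; it down-weights well-classified examples through the focusing parameter $\gamma$:

$$\mathrm{FL}(p_t) = -\,\alpha_t\,(1 - p_t)^{\gamma}\,\log(p_t)$$

where $p_t$ is the model's predicted probability for the true class.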
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class FocalLoss(nn.Module):
    def __init__(self, gamma=0, alpha=None, size_average=True):
        super(FocalLoss, self).__init__()
        self.gamma = gamma
        self.alpha = alpha
        if isinstance(alpha, (float, int)):
            self.alpha = torch.Tensor([alpha, 1-alpha])
        if isinstance(alpha, list):
            self.alpha = torch.Tensor(alpha)
        self.size_average = size_average

    def forward(self, input, target):
        if input.dim() > 2:
            input = input.view(input.size(0), input.size(1), -1)  # N,C,H,W => N,C,H*W
            input = input.transpose(1, 2)                         # N,C,H*W => N,H*W,C
            input = input.contiguous().view(-1, input.size(2))    # N,H*W,C => N*H*W,C
        target = target.view(-1, 1)

        logpt = F.log_softmax(input)
        logpt = logpt.gather(1, target)
        logpt = logpt.view(-1)
        pt = Variable(logpt.data.exp())

        if self.alpha is not None:
            if self.alpha.type() != input.data.type():
                self.alpha = self.alpha.type_as(input.data)
            at = self.alpha.gather(0, target.data.view(-1))
            logpt = logpt * Variable(at)

        loss = -1 * (1-pt)**self.gamma * logpt
        if self.size_average:
            return loss.mean()
        else:
            return loss.sum()


learn = tabular_learner(data, layers=[150, 50], metrics=[accuracy, FBeta("macro")])
learn.load('150-50-focal')
learn.loss_func = FocalLoss()

#hide
learn.fit_one_cycle(5, 1e-4, wd=0.2)
learn.save('150-50-focal')
learn.export('150-50-focal.pth')

#hide
testdf = df.iloc[660:861].copy()
testdf.urgency.value_counts()
testdf.head()
testdf = testdf.iloc[:, 1:]

#hide
testdf.insert(0, 'predictions', '')

#hide
for i in range(len(testdf)):
    row = testdf.iloc[i][1:]
    testdf.predictions.iloc[i] = str(learn.predict(row)[0])
/usr/local/lib/python3.7/site-packages/pandas/core/indexing.py:205: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy self._setitem_with_indexer(indexer, value)
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Making predictions

We've held out a test set to see how well our model works by making predictions on it. Interestingly, all those predicted with 'High' urgency have a common trait: the absence of **chills** and **sore throat**.
testdf.urgency.value_counts()
testdf.predictions.value_counts()

from sklearn.metrics import classification_report
print(classification_report(testdf.predictions, testdf.urgency, labels=["High", "Low"]))
print(classification_report(testdf.predictions, testdf.urgency, labels=["High", "Low"]))

testdf = pd.read_csv('processed_over_test.csv')
testdf = testdf.iloc[:, 1:]
testdf.head()

# map the 0/1 columns back to human-readable labels
yesnomapper = {1: 'Yes', 0: 'No'}
for col in testdf.columns[2:-1]:
    testdf[col] = testdf[col].map(yesnomapper)
testdf['sex'] = testdf['sex'].map({1: 'male', 0: 'female'})
testdf['urgency'] = testdf['urgency'].map({0: 'Low', 1: 'High'})

from sklearn.metrics import confusion_matrix
cm_test = confusion_matrix(testdf.urgency, testdf.predictions)
cm_test

cm_test = np.array([[72, 51], [18, 27]])
cm_test
cm_test2 = np.array([[94, 29], [30, 15]])
df_cm

import seaborn as sns
import pandas as pd

fig, ax = plt.subplots()
fig.set_size_inches(7, 5)
df_cm = pd.DataFrame(cm_test2,
                     index=['Actual Low', 'Actual High'],
                     columns=['Predicted Low', 'Predicted High'])
sns.set(font_scale=1.2)
sns.heatmap(df_cm, annot=True, ax=ax)
ax.set_ylim([0, 2])
ax.set_title('Deep Model Confusion Matrix')
fig.savefig('DeepModel_CM.png')
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Profile after focal loss
import seaborn as sns
import pandas as pd

fig, ax = plt.subplots()
fig.set_size_inches(7, 5)
df_cm = pd.DataFrame(cm_test,
                     index=['Actual Low', 'Actual High'],
                     columns=['Predicted Low', 'Predicted High'])
sns.set(font_scale=1.2)
sns.heatmap(df_cm, annot=True, ax=ax)
ax.set_ylim([0, 2])
ax.set_title('Deep Model Confusion Matrix (with Focal Loss)')
fig.savefig('DeepModel_CM_Focal Loss.png')

import seaborn as sns
import pandas as pd

fig, ax = plt.subplots()
fig.set_size_inches(7, 5)
df_cm = pd.DataFrame(cm_test,
                     index=['Actual Low', 'Actual High'],
                     columns=['Predicted Low', 'Predicted High'])
sns.set(font_scale=1.2)
sns.heatmap(df_cm, annot=True, ax=ax)
ax.set_ylim([0, 2])
ax.set_title('Deep Model Confusion Matrix (with Focal Loss)')

testdf.head()

row = testdf.iloc[0]
round(float(learn.predict(row[1:-1])[2][0]), 5)
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Experimental Section

Trying to figure out top
testdf['probability'] = 0.0  # create the column before filling it in
for i in range(len(testdf)):
    row = testdf.iloc[i][1:]
    testdf.probability.iloc[i] = round(float(learn.predict(row[1:-1])[2][0]), 5)

testdf.head()
testdf.sort_values(by=['probability'], ascending=False, inplace=True)

# cumulative lift gain: baseline model - test 20%
# Cost-based action: give kits only to the top 20%
# Profiling them: how can you get the probs? Decile?
#   - subset your group: divide 100 people into ten equal groups in descending order of probability
#   - profile them: see features (prediction-important features), top 20 vs rest 80
#   - descriptive statistics (count, mean, median, average): how are they different?
#     (see a big distinction between top 20 and rest 80), figure out what is happening
# questions: lift curve
# 1. GET PROBABILITIES  2. MAKE DECILES  3. MAKE CURVE  4. PROFILING (feature selection - HOW ARE THEY BEHAVING??)
# Optional: work with different thresholds;
#           confusion matrix to risk matrix (cost what minimizes - risk utility matrix)

import scikitplot as skplt
skplt.metrics.plot_cumulative_gain(y_true=testdf.urgency, y_probas=testdf.probability)
# plt.savefig('lift_curve.png')
plt.show()

df['decile1'] = pd.qcut(df['pred_prob'].rank(method='first'), 10, labels=np.arange(10, 0, -1))
_____no_output_____
Unlicense
UnivAiBlog/.ipynb_checkpoints/CoVID-49-checkpoint.ipynb
hargun3045/covid19-app
Edgar Holdings

The examples in this notebook demonstrate using the GremlinPython library to connect to and work with a Neptune instance. Using a Jupyter notebook in this way provides a nice way to interact with your Neptune graph database in a familiar and instantly productive environment.

Connect to the Neptune database that has the Edgar data loaded

When the SageMaker notebook instance was created, the appropriate Python libraries for working with a TinkerPop-enabled graph were installed. We now need to `import` some classes from those libraries before connecting to our Neptune instance, loading some sample data, and running queries. Below are the packages that need to be installed; this should be executed once to configure the environment.
!pip install --upgrade pip
!pip install futures
!pip install gremlinpython
!pip install SPARQLWrapper
!pip install tornado
!pip install tornado-httpclient-session
!pip install tornado-utils
!pip install matplotlib
!pip install numpy
!pip install pandas
!pip install networkx
Requirement already up-to-date: pip in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (19.0.3) Requirement already satisfied: futures in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (3.1.1) Requirement already satisfied: gremlinpython in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (3.4.0) Requirement already satisfied: six>=1.10.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (1.11.0) Requirement already satisfied: tornado<5.0,>=4.4.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (4.5.3) Requirement already satisfied: aenum>=1.4.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gremlinpython) (2.1.2) Requirement already satisfied: SPARQLWrapper in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.8.2) Requirement already satisfied: rdflib>=4.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from SPARQLWrapper) (4.2.2) Requirement already satisfied: isodate in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from rdflib>=4.0->SPARQLWrapper) (0.6.0) Requirement already satisfied: pyparsing in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from rdflib>=4.0->SPARQLWrapper) (2.2.0) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from isodate->rdflib>=4.0->SPARQLWrapper) (1.11.0) Requirement already satisfied: tornado in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (4.5.3) Requirement already satisfied: tornado-httpclient-session in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.2.5) Requirement already satisfied: tornado in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from tornado-httpclient-session) (4.5.3) Requirement already satisfied: tornado-utils in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.6) Requirement already satisfied: matplotlib in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (2.2.2) Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (0.10.0) Requirement already satisfied: numpy>=1.7.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (1.14.5) Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (1.0.1) Requirement already satisfied: python-dateutil>=2.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (2.7.3) Requirement already satisfied: pytz in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (2018.4) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (2.2.0) Requirement already satisfied: six>=1.10 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from matplotlib) (1.11.0) Requirement already satisfied: setuptools in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from kiwisolver>=1.0.1->matplotlib) (39.1.0) Requirement already satisfied: numpy in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.14.5) Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.22.0) Requirement 
already satisfied: pytz>=2011k in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (2018.4) Requirement already satisfied: numpy>=1.9.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (1.14.5) Requirement already satisfied: python-dateutil>=2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from pandas) (2.7.3) Requirement already satisfied: six>=1.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from python-dateutil>=2->pandas) (1.11.0) Requirement already satisfied: networkx in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (2.1) Requirement already satisfied: decorator>=4.1.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from networkx) (4.3.0)
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Establish access to our Neptune instance

Before we can work with our graph we need to establish a connection to it. This is done using the `DriverRemoteConnection` capability as defined by Apache TinkerPop and supported by GremlinPython. The `neptune.py` helper module facilitates creating this connection.

Once this cell has been run we will be able to use the variable `g` to refer to our graph in Gremlin queries in subsequent cells. By default Neptune uses port 8182 and that is what we connect to below. When you configure your own Neptune instance you can choose a different endpoint and port number by specifying the `neptune_endpoint` and `neptune_port` parameters to the `graphTraversal()` method.
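As a hedged illustration only (the exact API lives in the `neptune.py` helper module bundled with these samples and may differ), connecting through the helper rather than building the connection by hand might look like this:

# Sketch only: assumes the local neptune.py helper (not the neptune.ai client)
# exposes a graphTraversal() method accepting the neptune_endpoint and
# neptune_port parameters mentioned above.
import neptune

g = neptune.graphTraversal(neptune_endpoint='myinstance.us-east-1.neptune.amazonaws.com',
                           neptune_port=8182)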
from gremlin_python import statics
from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.process.strategies import *
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

endpoint = "wss://dbfindata.carpeooi4ov5.us-east-1.neptune.amazonaws.com:8182/gremlin"

graph = Graph()
g = graph.traversal().withRemote(DriverRemoteConnection(endpoint, 'g'))
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Let's find out a bit about the graph

Let's start off with a simple query just to make sure our connection to Neptune is working. The queries below look at all of the vertices and edges in the graph and create two maps that show the demographics of the graph. As we are using the air routes data set, not surprisingly, the values returned are related to airports and routes.
vertices = g.V().groupCount().by(T.label).toList()
edges = g.E().groupCount().by(T.label).toList()
print(vertices)
print(edges)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Find routes longer than 8,400 miles

The query below finds routes in the graph that are longer than 8,400 miles. This is done by examining the `dist` property of the `routes` edges in the graph. Having found some edges that meet our criteria, we sort them in descending order by distance. The `where` step filters out the reverse-direction routes for the ones that we have already found because we do not, in this case, want two results for each route. As an experiment, try removing the `where` line and observe the additional results that are returned. Lastly we generate some `path` results using the airport codes and route distances. Notice how we have laid the Gremlin query out over multiple lines to make it easier to read. To avoid errors, when you lay out a query in this way using Python, each line must end with a backslash character "\".

The results from running the query will be placed into the variable `paths`. Notice how we ended the Gremlin query with a call to `toList`. This tells Gremlin that we want our results back in a list. We can then use a Python `for` loop to print those results. Each entry in the list will itself be a list containing the starting airport code, the length of the route and the destination airport code.
paths = g.V().hasLabel('airport').as_('a') \
             .outE('route').has('dist', gt(8400)) \
             .order().by('dist', Order.decr) \
             .inV() \
             .where(P.lt('a')).by('code') \
             .path().by('code').by('dist').by('code').toList()

for p in paths:
    print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Draw a bar chart that represents the routes we just found.

One of the nice things about using Python to work with our graph is that we can take advantage of the larger Python ecosystem of libraries such as `matplotlib`, `numpy` and `pandas` to further analyze our data and represent it pictorially. So, now that we have found some long airline routes, we can build a bar chart that represents them graphically.
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

routes = list()
dist = list()

# Construct the x-axis labels by combining the airport pairs we found
# into strings with a "-" between them. We also build a list containing
# the distance values that will be used to construct and label the bars.
for i in range(len(paths)):
    routes.append(paths[i][0] + '-' + paths[i][2])
    dist.append(paths[i][1])

# Setup everything we need to draw the chart
y_pos = np.arange(len(routes))
y_labels = (0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000)
freq_series = pd.Series(dist)
plt.figure(figsize=(11, 6))
fs = freq_series.plot(kind='bar')
fs.set_xticks(y_pos, routes)
fs.set_ylabel('Miles')
fs.set_title('Longest routes')
fs.set_yticklabels(y_labels)
fs.set_xticklabels(routes)
fs.yaxis.set_ticks(np.arange(0, 10000, 1000))
fs.yaxis.set_ticklabels(y_labels)

# Annotate each bar with the distance value
for i in range(len(paths)):
    fs.annotate(dist[i], xy=(i, dist[i]+60), xycoords='data', ha='center')

# We are finally ready to draw the bar chart
plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Explore the distribution of airports by continent

The next example queries the graph to find out how many airports are in each continent. The query starts by finding all vertices that are continents. Next, those vertices are grouped, which creates a map (or dict) whose keys are the continent descriptions and whose values represent the counts of the outgoing edges with a 'contains' label. Finally the resulting map is sorted using the keys in ascending order. That result is then returned to our Python code as the variable `m`, and we can print the map nicely using regular Python concepts.
# Return a map where the keys are the continent names and the values are the
# number of airports in that continent.
m = g.V().hasLabel('continent') \
         .group().by('desc').by(__.out('contains').count()) \
         .order(Scope.local).by(Column.keys) \
         .next()

for c, n in m.items():
    print('%4d %s' % (n, c))
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Draw a pie chart representing the distribution by continent

Rather than return the results as text like we did above, it might be nicer to display them as percentages on a pie chart. That is what the code in the next cell does. Rather than return the descriptions of the continents (their names), this time our Gremlin query simply retrieves the two-character code representing each continent.
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np

# Return a map where the keys are the continent codes and the values are the
# number of airports in that continent.
m = g.V().hasLabel('continent').group().by('code').by(__.out().count()).next()

fig, pie1 = plt.subplots()
pie1.pie(m.values(),
         labels=m.keys(),
         autopct='%1.1f%%',
         shadow=True,
         startangle=90,
         explode=(0, 0, 0.1, 0, 0, 0, 0))
pie1.axis('equal')
plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Find some routes from London to San Jose and draw them

One of the nice things about connected graph data is that it lends itself nicely to visualization that people can get value from looking at. The Python `networkx` library makes it fairly easy to draw a graph. The next example takes advantage of this capability to draw a directed graph (DiGraph) of a few airline routes.

The query below starts by finding the vertex that represents London Heathrow (LHR). It then finds up to 15 routes from LHR that end up in San Jose, California (SJC) with one stop on the way. Those routes are returned as a list of paths. Each path will contain the three-character IATA codes representing the airports found.

The main purpose of this example is to show that we can easily extract part of a larger graph and render it graphically in a way that is easy for an end user to comprehend.
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx

# Find up to 15 routes from LHR to SJC that make one stop.
paths = g.V().has('airport', 'code', 'LHR') \
             .out().out().has('code', 'SJC').limit(15) \
             .path().by('code').toList()

# Create a new empty DiGraph
G = nx.DiGraph()

# Add the routes we found to DiGraph we just created
for p in paths:
    G.add_edge(p[0], p[1])
    G.add_edge(p[1], p[2])

# Give the starting and ending airports a different color
colors = []
for label in G:
    if label in ['LHR', 'SJC']:
        colors.append('yellow')
    else:
        colors.append('#11cc77')

# Now draw the graph
plt.figure(figsize=(5, 5))
nx.draw(G, node_color=colors, node_size=1200, with_labels=True)
plt.show()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
PART 2 - Examples that use iPython Gremlin

This part of the notebook contains examples that use the iPython Gremlin Jupyter extension to work with a Neptune instance using Gremlin.

Configuring iPython Gremlin to work with Neptune

Before we can start to use iPython Gremlin we need to load the Jupyter Kernel extension and configure access to our Neptune endpoint.
# Create a string containing the full Web Socket path to the endpoint
# Replace <neptune-instance-name> with the name of your Neptune instance,
# which will be of the form myinstance.us-east-1.neptune.amazonaws.com

#neptune_endpoint = '<neptune-instance-name>'
neptune_endpoint = os.environ['NEPTUNE_CLUSTER_ENDPOINT']
neptune_port = os.environ['NEPTUNE_CLUSTER_PORT']
neptune_gremlin_endpoint = 'ws://' + neptune_endpoint + ':' + neptune_port + '/gremlin'

# Load the iPython Gremlin extension and setup access to Neptune.
%load_ext gremlin
%gremlin.connection.set_current $neptune_gremlin_endpoint
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Run this cell if you need to reload the Gremlin extension.

Occasionally it becomes necessary to reload the iPython Gremlin extension to make things work. Running this cell will do that for you.
# Re-load the iPython Gremlin Jupyter Kernel extension.
%reload_ext gremlin
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
A simple query to make sure we can connect to the graph. Find all the airports in England that are in London. Notice that when using iPython Gremlin you do not need to use a terminal step such as `next` or `toList` at the end of the query in order to get it to return results. As mentioned earlier in this post, the `%reset -f` is to work around a known issue with iPython Gremlin.
%reset -f
%gremlin g.V().has('airport','region','GB-ENG') \
              .has('city','London').values('desc')
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
You can store the results of a query in a variable just as when using Gremlin Python.

The query below is the same as the previous one except that the results of running the query are stored in the variable 'places'. We can then work with that variable in our code.
%reset -f
places = %gremlin g.V().has('airport','region','GB-ENG') \
                        .has('city','London').values('desc')

for p in places:
    print(p)
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Treating entire cells as Gremlin

Any cell that begins with `%%gremlin` tells iPython Gremlin to treat the entire cell as Gremlin. You cannot mix Python code into these cells.
%%gremlin
g.V().has('city','London').has('region','GB-ENG').count()
_____no_output_____
MIT-0
neptune-sagemaker/notebooks/edgar/.ipynb_checkpoints/01-edar-checkpoint.ipynb
JanuaryThomas/amazon-neptune-samples
Creating a pandas data frame for experimental PF values

Wan et al. (JCTC 2020) trained a forward model on experimentally measured HDX protection factors:

* 72 PF values for backbone amides of ubiquitin taken from Craig et al.
* 30 (of 53) PF values for backbone amides of BPTI taken from Persson et al.

In this notebook we have converted the published values to $\ln$ PF (natural log).

References

Wan, Hongbin, Yunhui Ge, Asghar Razavi, and Vincent A. Voelz. “Reconciling Simulated Ensembles of Apomyoglobin with Experimental Hydrogen/Deuterium Exchange Data Using Bayesian Inference and Multiensemble Markov State Models.” Journal of Chemical Theory and Computation 16, no. 2 (February 11, 2020): 1333–48. https://doi.org/10.1021/acs.jctc.9b01240.

Craig, P. O.; Lätzer, J.; Weinkam, P.; Hoffman, R. M. B.; Ferreiro, D. U.; Komives, E. A.; Wolynes, P. G. Journal of the American Chemical Society 2011, 133, 17463–17472.

Persson, F.; Halle, B. Proceedings of the National Academy of Sciences 2015, 112, 10383–10388.
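As a small worked example (assuming, as a labeled guess, that the published protection factors were reported as $\log_{10}$ PF; check the original papers for the exact convention), the change of base to the natural log used here is:

# Hedged sketch: assuming the published protection factors were reported as
# log10(PF), converting to the natural log used below is a change of base:
# ln(PF) = log10(PF) * ln(10).
import numpy as np

log10_pf = 2.697                  # hypothetical published value
ln_pf = log10_pf * np.log(10.0)
print(round(ln_pf, 6))            # ~6.210072, comparable to the GLN-2 entry below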
import os, sys import numpy as np import pandas as pd ### Ubiquitin ubiquitin_text ="""#residue\tresnum\tln PF \\ GLN & 2 & 6.210072 \\ ILE & 3 & 13.7372227 \\ PHE & 4 & 13.4839383 \\ VAL & 5 & 13.1523661 \\ LYS & 6 & 10.6909026 \\ THR & 7 & 10.4629467 \\ LEU & 8 & 0.67005226 \\ THR & 9 & 0 \\ GLY & 10 & 3.93511792 \\ LYS & 11 & 5.21075007 \\ THR & 12 & 3.2443424 \\ ILE & 13 & 11.0570136 \\ THR & 14 & 2.53514619 \\ LEU & 15 & 11.5336487 \\ GLU & 16 & 5.14167251 \\ VAL & 17 & 12.2405424 \\ GLU & 18 & 7.97845735 \\ SER & 20 & 3.82459384 \\ ASP & 21 & 12.9796722 \\ THR & 22 & 7.73438333 \\ ILE & 23 & 11.2135894 \\ GLU & 24 & 5.00581999 \\ ASN & 25 & 10.5550501 \\ VAL & 26 & 14.6214153 \\ LYS & 27 & 14.8539764 \\ ALA & 28 & 11.4806893 \\ LYS & 29 & 13.4517021 \\ ILE & 30 & 14.2483966 \\ GLN & 31 & 7.7689221 \\ ASP & 32 & 5.6229128 \\ LYS & 33 & 4.56602624 \\ GLU & 34 & 6.21467717 \\ GLY & 35 & 6.26763662 \\ ILE & 36 & 6.981438 \\ ASP & 39 & 1.64174317 \\ GLN & 40 & 6.45875119 \\ GLN & 41 & 8.26167531 \\ ARG & 42 & 9.13435506 \\ LEU & 43 & 6.56927527 \\ ILE & 44 & 12.6573103 \\ PHE & 45 & 6.49328996 \\ ALA & 46 & 0 \\ GLY & 47 & 4.1699816 \\ LYS & 48 & 7.86102551 \\ GLN & 49 & 3.80617316 \\ LEU & 50 & 8.45969763 \\ GLU & 51 & 5.41798272 \\ ASP & 52 & 2.73316851 \\ GLY & 53 & 3.31802512 \\ ARG & 54 & 9.9932193 \\ THR & 55 & 11.3494419 \\ LEU & 56 & 13.0211187 \\ SER & 57 & 6.68440452 \\ ASP & 58 & 6.84098031 \\ TYR & 59 & 10.9580025 \\ ASN & 60 & 5.11404149 \\ ILE & 61 & 8.54949845 \\ GLN & 62 & 8.27088565 \\ LYS & 63 & 3.09927954 \\ GLU & 64 & 6.37125295 \\ SER & 65 & 7.52484808 \\ THR & 66 & 6.1939539 \\ LEU & 67 & 7.75740918 \\ HIS & 68 & 8.49884158 \\ LEU & 69 & 7.95773408 \\ VAL & 70 & 9.09981629 \\ LEU & 71 & 3.0278994 \\ ARG & 72 & 2.5788953 \\ LEU & 73 & 0 \\ ARG & 74 & 0 \\ GLY & 75 & 0 \\ GLY & 76 & 0 \\""" ubiquitin_text = ubiquitin_text.replace(' & ','\t').replace(' \\','') # print(ubiquitin_text) fout = open('ubiquitin_lnPF.txt', 'w') fout.write(ubiquitin_text) fout.close() ubi = pd.read_csv('ubiquitin_lnPF.txt', header=0, sep='\t') ubi ### BPTI bpti_text ="""#residue\tresnum\tln PF \\ CYS & 5 & 8.52877518 \\ LEU & 6 & 7.43504727 \\ GLU & 7 & 8.229439 \\ TYR & 10 & 5.756463 \\ GLY & 12 & 3.840712 \\ ALA & 16 & 6.963017 \\ ARG & 17 & 1.752267 \\ IIE & 18 & 12.37639 \\ IIE & 19 & 2.256533 \\ ALA & 25 & 3.04632 \\ GLY & 28 & 7.676819 \\ LEU & 29 & 10.85899 \\ CYS & 30 & 3.677228 \\ THR & 32 & 5.701201 \\ VAL & 34 & 3.734793 \\ TYR & 35 & 11.23431 \\ GLY & 36 & 9.574149 \\ GLY & 37 & 11.43924 \\ CYS & 38 & 4.503856 \\ LYS & 41 & 6.988346 \\ ARG & 42 & 2.141404 \\ ASN & 43 & 5.125554 \\ ASN & 44 & 14.02274 \\ SER & 47 & 4.503856 \\ ALA & 48 & 2.403899 \\ MET & 52 & 11.02938 \\ ARG & 53 & 9.825131 \\ THR & 54 & 7.962339 \\ CYS & 55 & 12.1139 \\ GLY & 56 & 8.008391 \\""" bpti_text = bpti_text.replace(' & ','\t').replace(' \\','') # print(bpti_text) fout = open('bpti_lnPF.txt', 'w') fout.write(bpti_text) fout.close() bpti = pd.read_csv('bpti_lnPF.txt', header=0, sep='\t') bpti ### Finally, we make a data frame that concatenates the ubiquitin and BPTI values ubi_bpti = pd.concat([ubi, bpti], ignore_index=True) ubi_bpti # Also, let's write a text file version too ubi_lines = ubiquitin_text.split('\n') bpti_lines = bpti_text.split('\n') ubi_bpti_lines = ubi_lines + bpti_lines[1:] fout = open('ubi_bpti_lnPF.txt', 'w') fout.writelines("%s\n" % l for l in ubi_bpti_lines) fout.close() # write all the data frames to JSON ubi.to_json('ubiquitin_lnPF.json') bpti.to_json('bpti_lnPF.json') 
ubi_bpti.to_json('ubi_bpti_lnPF.json') # write a numpy array of JUST the lnPF values all_lnPF_values = np.array(ubi_bpti['ln PF']) all_lnPF_values np.save('ubi_bpti_lnPF.npy',all_lnPF_values) ### VERIFY that this data matches Hongbin's earlier data file all_lnPF_values_HONGBIN = np.load('ubi_bpti_all_exp_data_in_ln.npy') print('all_lnPF_values.shape', all_lnPF_values.shape) print('all_lnPF_values_HONGBIN.shape', all_lnPF_values_HONGBIN.shape) for i in range(all_lnPF_values_HONGBIN.shape[0]): print(all_lnPF_values[i], all_lnPF_values_HONGBIN[i])
all_lnPF_values.shape (102,) all_lnPF_values_HONGBIN.shape (102,) 6.210072 6.210071995804942 13.737222699999998 13.737222664802477 13.4839383 13.483938304573131 13.1523661 13.152366051181989 10.6909026 10.690902586771355 10.4629467 10.462946662564942 0.67005226 0.6700522620612673 0.0 0.0 3.93511792 3.9351179239268244 5.21075007 5.210750065445525 3.2443424 3.2443423960286104 11.0570136 11.057013616557407 2.53514619 2.5351461873864443 11.533648699999999 11.533648730807176 5.14167251 5.141672512655704 12.2405424 12.240542354356347 7.97845735 7.978457347224368 3.82459384 3.82459383946311 12.9796722 12.979672169207435 7.73438333 7.7343833273669995 11.2135894 11.213589402881002 5.00581999 5.005819992169055 10.555050099999999 10.555050066284705 14.621415300000002 14.62141534051219 14.853976399999999 14.853976434904588 11.480689300000002 11.480689273668311 13.4517021 13.451702113271214 14.248396599999998 14.248396555447155 7.768922099999999 7.76892210376191 5.6229128 5.62291279709146 4.56602624 4.566026239407193 6.21467717 6.214677165990929 6.26763662 6.267636623129793 6.981438000000001 6.9814380019579465 1.64174317 1.6417431713047546 6.45875119 6.458751185848299 8.26167531 8.261675313662636 9.13435506 9.13435506390738 6.56927527 6.569275270312013 12.6573103 12.65731025618827 6.49328996 6.4932899622432085 0.0 0.0 4.1699816 4.169981603412217 7.86102551 7.861025507481672 3.80617316 3.8061731587191576 8.45969763 8.459697631660124 5.41798272 5.41798272381499 2.73316851 2.7331685053839325 3.31802512 3.31802511900442 9.993219300000002 9.993219303594158 11.3494419 11.349441923367651 13.021118699999999 13.02111870088133 6.68440452 6.684404524961715 6.84098031 6.84098031128531 10.9580025 10.958002457558663 5.11404149 5.114041491539775 8.54949845 8.549498450286892 8.27088565 8.270885654034613 3.09927954 3.0992795351699858 6.37125295 6.371252952314524 7.52484808 7.524848083904541 6.1939539 6.193953900153983 7.75740918 7.757409178296941 8.49884158 8.498841578241022 7.95773408 7.9577340813874216 9.09981629 9.099816287512468 3.0278994 3.02789939728717 2.5788952999999997 2.5788953041533316 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 8.52877518 8.528775184449946 7.43504727 7.435047265277774 8.229439 8.229439122360718 5.756463 5.756462732485114 3.8407120000000003 3.840711935114068 6.963017 6.963017321213994 1.7522669999999998 1.7522672557684686 12.376389999999999 12.376394874842996 2.256533 2.2565333911341647 3.04632 3.046320078031122 7.676819 7.676818700042149 10.858989999999999 10.85899129855992 3.677228 3.677228393511491 5.701201 5.701200690253257 3.734793 3.7347930208363422 11.23431 11.234312668717948 9.574149 9.574148816669243 11.43924 11.439242741994418 4.503856 4.503856441896353 6.988346000000001 6.988345757236929 2.141404 2.1414041364844625 5.125554 5.125554417004746 14.022739999999999 14.022743216333739 4.503856 4.503856441896353 2.403899 2.4038988370857837 11.02938 11.02938259544148 9.825130999999999 9.825130591805594 7.962339 7.96233925157341 12.1139 12.113900174241675 8.008391 8.008390953433292
MIT
experimental-data/create_pd.ipynb
vvoelz/HDX-forward-model
Implementation of a Devito skew self adjoint variable density visco-acoustic isotropic modeling operator -- Correctness Testing --

This operator is contributed by Chevron Energy Technology Company (2020).

This operator is based on simplifications of the systems presented in:

**Self-adjoint, energy-conserving second-order pseudoacoustic systems for VTI and TTI media for reverse time migration and full-waveform inversion** (2016)
Kenneth Bube, John Washbourne, Raymond Ergas, and Tamas Nemeth
SEG Technical Program Expanded Abstracts
https://library.seg.org/doi/10.1190/segam2016-13878451.1

Introduction

The goal of this tutorial set is to generate and prove correctness of modeling and inversion capability in Devito for variable density visco-acoustics using an energy conserving form of the wave equation. We describe how the linearization of the energy conserving *skew self adjoint* system with respect to modeling parameters allows using the same modeling system for all nonlinear and linearized forward and adjoint finite difference evolutions.

There are three notebooks in this series:

1. Implementation of a Devito skew self adjoint variable density visco-acoustic isotropic modeling operator -- Nonlinear Ops
- Implement the nonlinear modeling operations.
- [ssa_01_iso_implementation1.ipynb](ssa_01_iso_implementation1.ipynb)

2. Implementation of a Devito skew self adjoint variable density visco-acoustic isotropic modeling operator -- Linearized Ops
- Implement the linearized (Jacobian) ```forward``` and ```adjoint``` modeling operations.
- [ssa_02_iso_implementation2.ipynb](ssa_02_iso_implementation2.ipynb)

3. Implementation of a Devito skew self adjoint variable density visco-acoustic isotropic modeling operator -- Correctness Testing
- Tests the correctness of the implemented operators.
- [ssa_03_iso_correctness.ipynb](ssa_03_iso_correctness.ipynb)

There are similar series of notebooks implementing and testing operators for VTI and TTI anisotropy ([README.md](README.md)). Below we describe a suite of unit tests that prove correctness for our *skew self adjoint* operators.

Outline

1. Define symbols
2. Definition of correctness tests
3. Analytic response in the far field
4. Modeling operator linearity test, with respect to source
5. Modeling operator adjoint test, with respect to source
6. Nonlinear operator linearization test, with respect to model
7. Jacobian operator linearity test, with respect to model
8. Jacobian operator adjoint test, with respect to model
9. Skew symmetry test for shifted derivatives
10.
References Table of symbolsWe show the symbols here relevant to the implementation of the linearized operators.| Symbol &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | Description | Dimensionality | |:---|:---|:---|| $\overleftarrow{\partial_t}$ | shifted first derivative wrt $t$ | shifted 1/2 sample backward in time || $\partial_{tt}$ | centered second derivative wrt $t$ | centered in time || $\overrightarrow{\partial_x},\ \overrightarrow{\partial_y},\ \overrightarrow{\partial_z}$ | + shifted first derivative wrt $x,y,z$ | shifted 1/2 sample forward in space || $\overleftarrow{\partial_x},\ \overleftarrow{\partial_y},\ \overleftarrow{\partial_z}$ | - shifted first derivative wrt $x,y,z$ | shifted 1/2 sample backward in space || $m(x,y,z)$ | Total P wave velocity ($m_0+\delta m$) | function of space || $m_0(x,y,z)$ | Reference P wave velocity | function of space || $\delta m(x,y,z)$ | Perturbation to P wave velocity | function of space || $u(t,x,y,z)$ | Total pressure wavefield ($u_0+\delta u$)| function of time and space || $u_0(t,x,y,z)$ | Reference pressure wavefield | function of time and space || $\delta u(t,x,y,z)$ | Perturbation to pressure wavefield | function of time and space || $s(t,x,y,z)$ | Source wavefield | function of time, localized in space to source location || $r(t,x,y,z)$ | Receiver wavefield | function of time, localized in space to receiver locations || $\delta r(t,x,y,z)$ | Receiver wavefield perturbation | function of time, localized in space to receiver locations || $F[m]\ q$ | Forward linear modeling operator | Nonlinear in $m$, linear in $q, s$: $\quad$ maps $q \rightarrow s$ || $\bigl( F[m] \bigr)^\top\ s$ | Adjoint linear modeling operator | Nonlinear in $m$, linear in $q, s$: $\quad$ maps $s \rightarrow q$ || $F[m; q]$ | Forward nonlinear modeling operator | Nonlinear in $m$, linear in $q$: $\quad$ maps $m \rightarrow r$ || $\nabla F[m; q]\ \delta m$ | Forward Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta m \rightarrow \delta r$ || $\bigl( \nabla F[m; q] \bigr)^\top\ \delta r$ | Adjoint Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta r \rightarrow \delta m$ || $\Delta_t, \Delta_x, \Delta_y, \Delta_z$ | sampling rates for $t, x, y , z$ | $t, x, y , z$ | A word about notation We use the arrow symbols over derivatives $\overrightarrow{\partial_x}$ as a shorthand notation to indicate that the derivative is taken at a shifted location. For example:- $\overrightarrow{\partial_x}\ u(t,x,y,z)$ indicates that the $x$ derivative of $u(t,x,y,z)$ is taken at $u(t,x+\frac{\Delta x}{2},y,z)$.- $\overleftarrow{\partial_z}\ u(t,x,y,z)$ indicates that the $z$ derivative of $u(t,x,y,z)$ is taken at $u(t,x,y,z-\frac{\Delta z}{2})$.- $\overleftarrow{\partial_t}\ u(t,x,y,z)$ indicates that the $t$ derivative of $u(t,x,y,z)$ is taken at $u(t-\frac{\Delta_t}{2},x,y,z)$.We usually drop the $(t,x,y,z)$ notation from wavefield variables unless required for clarity of exposition, so that $u(t,x,y,z)$ becomes $u$. Definition of correctness testsWe believe that if an operator passes the following suite of unit tests, it can be considered to be *righteous*. 1. Analytic response in the far fieldTest that data generated in a wholespace matches analogous analytic data away from the near field. 
We re-use the material shown in the [examples/seismic/acoustic/accuracy.ipynb](https://github.com/devitocodes/devito/blob/master/examples/seismic/acoustic/accuracy.ipynb) notebook. 2. Modeling operator linearity test, with respect to sourceFor random vectors $s$ and $r$, prove:$$\begin{aligned}F[m]\ (\alpha\ s) &\approx \alpha\ F[m]\ s \\[5pt]F[m]^\top (\alpha\ r) &\approx \alpha\ F[m]^\top r \\[5pt]\end{aligned}$$ 3. Modeling operator adjoint test, with respect to sourceFor random vectors $s$ and $r$, prove:$$r \cdot F[m]\ s \approx s \cdot F[m]^\top r$$ 4. Nonlinear operator linearization test, with respect to modelFor initial velocity model $m$ and random perturbation $\delta m$ prove that the $L_2$ norm error in the linearization $E(h)$ is second order (decreases quadratically) with the magnitude of the perturbation.$$E(h) = \biggl\|\ f(m+h\ \delta m) - f(m) - h\ \nabla F[m; q]\ \delta m\ \biggr\|$$One way to do this is to run a suite of $h$ values decreasing by a factor of $\gamma$, and prove the error decreases by a factor of $\gamma^2$: $$\frac{E\left(h\right)}{E\left(h/\gamma\right)} \approx \gamma^2$$Elsewhere in Devito tutorials, this relation is proven by fitting a line to a sequence of $E(h)$ for various $h$ and showing second order error decrease. We employ this strategy here. 5. Jacobian operator linearity test, with respect to modelFor initial velocity model $m$ and random vectors $\delta m$ and $\delta r$, prove:$$\begin{aligned}\nabla F[m; q]\ (\alpha\ \delta m) &\approx \alpha\ \nabla F[m; q]\ \delta m \\[5pt](\nabla F[m; q])^\top (\alpha\ \delta r) &\approx \alpha\ (\nabla F[m; q])^\top \delta r\end{aligned}$$ 6. Jacobian operator adjoint test, with respect to model perturbation and receiver wavefield perturbation For initial velocity model $m$ and random vectors $\delta m$ and $\delta r$, prove:$$\delta r \cdot \nabla F[m; q]\ \delta m \approx \delta m \cdot (\nabla F[m; q])^\top \delta r$$ 7. Skew symmetry for shifted derivativesIn addition to these tests, recall that in the first notebook ([ssa_01_iso_implementation1.ipynb](ssa_01_iso_implementation1.ipynb)) we implemented a unit test that demonstrates skew symmetry of the Devito generated shifted derivatives. We include that test in our suite of unit tests for completeness. Ensure for random $x_1, x_2$ that Devito shifted derivative operators $\overrightarrow{\partial_x}$ and $\overrightarrow{\partial_x}$ are skew symmetric by verifying the following dot product test.$$x_2 \cdot \left( \overrightarrow{\partial_x}\ x_1 \right) \approx -\ x_1 \cdot \left( \overleftarrow{\partial_x}\ x_2 \right) $$ Implementation of correctness testsBelow we implement the correctness tests described above. These tests are copied from standalone tests that run in the Devito project *continuous integration* (CI) pipeline via the script ```test_iso_wavesolver.py```. We will implement the test methods in one cell and then call from the next cell to verify correctness, but note that a wider variety of parameterization is tested in the CI pipeline.For these tests we use the convenience functions implemented in ```operators.py``` and ```wavesolver.py``` rather than implement the operators in the notebook as we have in the first two notebooks in this series. 
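Before looking at the actual operators, here is a minimal, generic sketch of the dot-product (adjoint) test pattern defined above, with a random matrix standing in for the linear operator $F[m]$; it only illustrates the test logic and is not the Devito implementation exercised by the CI tests:

# A minimal, generic sketch of the dot-product (adjoint) test described above,
# using a random matrix as a stand-in for the linear operator F[m]; the real
# tests against the Devito operators live in the test script referenced below.
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 100, 80

F = rng.standard_normal((n_data, n_model))   # stand-in linear operator
s = rng.standard_normal(n_model)             # random "source" vector
r = rng.standard_normal(n_data)              # random "receiver" vector

lhs = r.dot(F @ s)      # r . (F s)
rhs = s.dot(F.T @ r)    # s . (F^T r)

# for an exact adjoint pair the two sides agree to near machine precision
assert abs(lhs - rhs) / max(abs(lhs), abs(rhs)) < 1e-10
print(lhs, rhs)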
Please review the source to compare with our notebook implementations:

- [operators.py](operators.py)
- [wavesolver.py](wavesolver.py)
- [test_wavesolver_iso.py](test_wavesolver_iso.py)

**Important note:** you must run these notebook cells in order, because some cells have dependencies on state initialized in previous cells.

Imports

We have grouped all imports used in this notebook here for consistency.
from scipy.special import hankel2
import numpy as np

from examples.seismic import RickerSource, Receiver, TimeAxis, Model, AcquisitionGeometry
from devito import (Grid, Function, TimeFunction, SpaceDimension, Constant,
                    Eq, Operator, solve, configuration)
from devito.finite_differences import Derivative
from devito.builtins import gaussian_smooth
from examples.seismic.skew_self_adjoint import (acoustic_ssa_setup, setup_w_over_q,
                                                SsaIsoAcousticWaveSolver)

import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
from timeit import default_timer as timer

# These lines force images to be displayed in the notebook, and scale up fonts
%matplotlib inline
mpl.rc('font', size=14)

# Make white background for plots, not transparent
plt.rcParams['figure.facecolor'] = 'white'

# Set the default language to openmp
configuration['language'] = 'openmp'

# Set logging to debug, captures statistics on the performance of operators
# configuration['log-level'] = 'DEBUG'
configuration['log-level'] = 'INFO'
_____no_output_____
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
1. Analytic response in the far field

Test that data generated in a wholespace matches analogous analytic data away from the near field. We copy/modify the material shown in the [examples/seismic/acoustic/accuracy.ipynb](https://github.com/devitocodes/devito/blob/master/examples/seismic/acoustic/accuracy.ipynb) notebook.

Analytic solution for the 2D acoustic wave equation

$$
\begin{aligned}
u_s(r, t) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \bigl\{ -i\ \pi\ H_0^{(2)}\left(k r \right)\ q(\omega)\ e^{i\omega t}\ d\omega \bigr\} \\[10pt]
r &= \sqrt{(x_{src} - x_{rec})^2+(z_{src} - z_{rec})^2}
\end{aligned}
$$

where $H_0^{(2)}$ is the Hankel function of the second kind, $q(\omega)$ is the Fourier spectrum of the source time function at angular frequencies $\omega$, and $k = (\omega\ /\ v)$ is the wavenumber. We look at the analytical and numerical solution at a single grid point. Note that we use a custom discretization for the analytic test that is much finer both temporally and spatially.
# Define the analytic response def analytic_response(fpeak, time_axis, src_coords, rec_coords, v): nt = time_axis.num dt = time_axis.step v0 = v.data[0,0] sx, sz = src_coords[0, :] rx, rz = rec_coords[0, :] ntpad = 20 * (nt - 1) + 1 tmaxpad = dt * (ntpad - 1) time_axis_pad = TimeAxis(start=tmin, stop=tmaxpad, step=dt) timepad = np.linspace(tmin, tmaxpad, ntpad) print(time_axis) print(time_axis_pad) srcpad = RickerSource(name='srcpad', grid=v.grid, f0=fpeak, npoint=1, time_range=time_axis_pad, t0w=t0w) nf = int(ntpad / 2 + 1) fnyq = 1.0 / (2 * dt) df = 1.0 / tmaxpad faxis = df * np.arange(nf) # Take the Fourier transform of the source time-function R = np.fft.fft(srcpad.wavelet[:]) R = R[0:nf] nf = len(R) # Compute the Hankel function and multiply by the source spectrum U_a = np.zeros((nf), dtype=complex) for a in range(1, nf - 1): w = 2 * np.pi * faxis[a] r = np.sqrt((rx - sx)**2 + (rz - sz)**2) U_a[a] = -1j * np.pi * hankel2(0.0, w * r / v0) * R[a] # Do inverse fft on 0:dt:T and you have analytical solution U_t = 1.0/(2.0 * np.pi) * np.real(np.fft.ifft(U_a[:], ntpad)) # Note that the analytic solution is scaled by dx^2 to convert to pressure return (np.real(U_t) * (dx**2)) #NBVAL_INGNORE_OUTPUT # Setup time / frequency nt = 1001 dt = 0.1 tmin = 0.0 tmax = dt * (nt - 1) fpeak = 0.090 t0w = 1.0 / fpeak omega = 2.0 * np.pi * fpeak time_axis = TimeAxis(start=tmin, stop=tmax, step=dt) time = np.linspace(tmin, tmax, nt) # Model space_order = 8 npad = 50 dx, dz = 0.5, 0.5 nx, nz = 801, 801 shape = (nx, nz) spacing = (dx, dz) origin = (0., 0.) dtype = np.float64 qmin = 0.1 qmax = 100000 v0 = 1.5*np.ones(shape) b0 = 1.0*np.ones(shape) # Model init_damp = lambda func, nbl: setup_w_over_q(func, omega, qmin, qmax, npad, sigma=0) model = Model(origin=origin, shape=shape, vp=v0, b=b0, spacing=spacing, nbl=npad, space_order=space_order, bcs=init_damp, dtype=dtype, dt=dt) # Source and reciver coordinates src_coords = np.empty((1, 2), dtype=dtype) rec_coords = np.empty((1, 2), dtype=dtype) src_coords[:, :] = np.array(model.domain_size) * .5 rec_coords[:, :] = np.array(model.domain_size) * .5 + 60 geometry = AcquisitionGeometry(model, rec_coords, src_coords, t0=0.0, tn=tmax, src_type='Ricker', f0=fpeak) # Solver setup solver = SsaIsoAcousticWaveSolver(model, geometry, space_order=space_order) # Numerical solution recNum, uNum, _ = solver.forward(dt=dt) # Analytic solution uAnaPad = analytic_response(fpeak, time_axis, src_coords, rec_coords, model.vp) uAna = uAnaPad[0:nt] # Compute RMS and difference diff = (recNum.data - uAna) nrms = np.max(np.abs(recNum.data)) arms = np.max(np.abs(uAna)) drms = np.max(np.abs(diff)) print("\nMaximum absolute numerical,analytic,diff; %+12.6e %+12.6e %+12.6e" % (nrms, arms, drms)) # This isnt a very strict tolerance ... 
tol = 0.1 assert np.allclose(diff, 0.0, atol=tol) nmin, nmax = np.min(recNum.data), np.max(recNum.data) amin, amax = np.min(uAna), np.max(uAna) print("") print("Numerical min/max; %+12.6e %+12.6e" % (nmin, nmax)) print("Analytic min/max; %+12.6e %+12.6e" % (amin, amax)) #NBVAL_INGNORE_OUTPUT # Plot x1 = origin[0] - model.nbl * model.spacing[0] x2 = model.domain_size[0] + model.nbl * model.spacing[0] z1 = origin[1] - model.nbl * model.spacing[1] z2 = model.domain_size[1] + model.nbl * model.spacing[1] xABC1 = origin[0] xABC2 = model.domain_size[0] zABC1 = origin[1] zABC2 = model.domain_size[1] plt_extent = [x1, x2, z2, z1] abc_pairsX = [xABC1, xABC1, xABC2, xABC2, xABC1] abc_pairsZ = [zABC1, zABC2, zABC2, zABC1, zABC1] plt.figure(figsize=(12.5,12.5)) # Plot wavefield plt.subplot(2,2,1) amax = 1.1 * np.max(np.abs(recNum.data[:])) plt.imshow(uNum.data[1,:,:], vmin=-amax, vmax=+amax, cmap="seismic", aspect="auto", extent=plt_extent) plt.plot(src_coords[0, 0], src_coords[0, 1], 'r*', markersize=15, label='Source') plt.plot(rec_coords[0, 0], rec_coords[0, 1], 'k^', markersize=11, label='Receiver') plt.plot(abc_pairsX, abc_pairsZ, 'black', linewidth=4, linestyle=':', label="ABC") plt.legend(loc="upper left", bbox_to_anchor=(0.0, 0.9, 0.35, .1), framealpha=1.0) plt.xlabel('x position (m)') plt.ylabel('z position (m)') plt.title('Wavefield of numerical solution') plt.tight_layout() # Plot trace plt.subplot(2,2,3) plt.plot(time, recNum.data[:, 0], '-b', label='Numeric') plt.plot(time, uAna[:], '--r', label='Analytic') plt.xlabel('Time (ms)') plt.ylabel('Amplitude') plt.title('Trace comparison of solutions') plt.legend(loc="upper right") plt.xlim([50,90]) plt.ylim([-0.7 * amax, +amax]) plt.subplot(2,2,4) plt.plot(time, 10 * (recNum.data[:, 0] - uAna[:]), '-k', label='Difference x10') plt.xlabel('Time (ms)') plt.ylabel('Amplitude') plt.title('Difference of solutions (x10)') plt.legend(loc="upper right") plt.xlim([50,90]) plt.ylim([-0.7 * amax, +amax]) plt.tight_layout() plt.show()
_____no_output_____
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
Reset default shapes for subsequent tests
npad = 10 fpeak = 0.010 qmin = 0.1 qmax = 500.0 tmax = 1000.0 shape = (101, 81)
_____no_output_____
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
2. Modeling operator linearity test, with respect to sourceFor random vectors $s$ and $r$, prove:$$\begin{aligned}F[m]\ (\alpha\ s) &\approx \alpha\ F[m]\ s \\[5pt]F[m]^\top (\alpha\ r) &\approx \alpha\ F[m]^\top r \\[5pt]\end{aligned}$$We first test the forward operator, and in the cell below that the adjoint operator.
#NBVAL_INGNORE_OUTPUT solver = acoustic_ssa_setup(shape=shape, dtype=dtype, space_order=8, tn=tmax) src = solver.geometry.src a = -1 + 2 * np.random.rand() rec1, _, _ = solver.forward(src) src.data[:] *= a rec2, _, _ = solver.forward(src) rec1.data[:] *= a # Check receiver wavefeild linearity # Normalize by rms of rec2, to enable using abolute tolerance below rms2 = np.sqrt(np.mean(rec2.data**2)) diff = (rec1.data - rec2.data) / rms2 print("\nlinearity forward F %s (so=%d) rms 1,2,diff; " "%+16.10e %+16.10e %+16.10e" % (shape, 8, np.sqrt(np.mean(rec1.data**2)), np.sqrt(np.mean(rec2.data**2)), np.sqrt(np.mean(diff**2)))) tol = 1.e-12 assert np.allclose(diff, 0.0, atol=tol) #NBVAL_INGNORE_OUTPUT src0 = solver.geometry.src rec, _, _ = solver.forward(src0) a = -1 + 2 * np.random.rand() src1, _, _ = solver.adjoint(rec) rec.data[:] = a * rec.data[:] src2, _, _ = solver.adjoint(rec) src1.data[:] *= a # Check adjoint source wavefeild linearity # Normalize by rms of rec2, to enable using abolute tolerance below rms2 = np.sqrt(np.mean(src2.data**2)) diff = (src1.data - src2.data) / rms2 print("\nlinearity adjoint F %s (so=%d) rms 1,2,diff; " "%+16.10e %+16.10e %+16.10e" % (shape, 8, np.sqrt(np.mean(src1.data**2)), np.sqrt(np.mean(src2.data**2)), np.sqrt(np.mean(diff**2)))) tol = 1.e-12 assert np.allclose(diff, 0.0, atol=tol)
Operator `IsoFwdOperator` run in 0.03 s Operator `IsoAdjOperator` run in 0.02 s Operator `IsoAdjOperator` run in 0.03 s
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
3. Modeling operator adjoint test, with respect to sourceFor random vectors $s$ and $r$, prove:$$r \cdot F[m]\ s \approx s \cdot F[m]^\top r$$
#NBVAL_INGNORE_OUTPUT
src1 = solver.geometry.src
rec1 = solver.geometry.rec
rec2, _, _ = solver.forward(src1)

# use the forward-modelled receiver data as the input to the adjoint
rec1.data[:] = rec2.data[:]
src2, _, _ = solver.adjoint(rec1)

sum_s = np.dot(src1.data.reshape(-1), src2.data.reshape(-1))
sum_r = np.dot(rec1.data.reshape(-1), rec2.data.reshape(-1))
diff = (sum_s - sum_r) / (sum_s + sum_r)

print("\nadjoint F %s (so=%d) sum_s, sum_r, diff; %+16.10e %+16.10e %+16.10e" %
      (shape, 8, sum_s, sum_r, diff))
assert np.isclose(diff, 0., atol=1.e-12)
Operator `IsoFwdOperator` run in 0.56 s Operator `IsoAdjOperator` run in 1.42 s
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
4. Nonlinear operator linearization test, with respect to modelFor initial velocity model $m$ and random perturbation $\delta m$ prove that the $L_2$ norm error in the linearization $E(h)$ is second order (decreases quadratically) with the magnitude of the perturbation.$$E(h) = \biggl\|\ f(m+h\ \delta m) - f(m) - h\ \nabla F[m; q]\ \delta m\ \biggr\|$$One way to do this is to run a suite of $h$ values decreasing by a factor of $\gamma$, and prove the error decreases by a factor of $\gamma^2$: $$\frac{E\left(h\right)}{E\left(h/\gamma\right)} \approx \gamma^2$$Elsewhere in Devito tutorials, this relation is proven by fitting a line to a sequence of $E(h)$ for various $h$ and showing second order error decrease. We employ this strategy here.
#NBVAL_INGNORE_OUTPUT src = solver.geometry.src # Create Functions for models and perturbation m0 = Function(name='m0', grid=solver.model.grid, space_order=8) mm = Function(name='mm', grid=solver.model.grid, space_order=8) dm = Function(name='dm', grid=solver.model.grid, space_order=8) # Background model m0.data[:] = 1.5 # Model perturbation, box of (repeatable) random values centered on middle of model dm.data[:] = 0 size = 5 ns = 2 * size + 1 nx2, nz2 = shape[0]//2, shape[1]//2 np.random.seed(0) dm.data[nx2-size:nx2+size, nz2-size:nz2+size] = -1 + 2 * np.random.rand(ns, ns) # Compute F(m + dm) rec0, u0, summary0 = solver.forward(src, vp=m0) # Compute J(dm) rec1, u1, du, summary1 = solver.jacobian(dm, src=src, vp=m0) # Linearization test via polyfit (see devito/tests/test_gradient.py) # Solve F(m + h dm) for sequence of decreasing h dh = np.sqrt(2.0) h = 0.1 nstep = 7 scale = np.empty(nstep) norm1 = np.empty(nstep) norm2 = np.empty(nstep) for kstep in range(nstep): h = h / dh mm.data[:] = m0.data + h * dm.data rec2, _, _ = solver.forward(src, vp=mm) scale[kstep] = h norm1[kstep] = 0.5 * np.linalg.norm(rec2.data - rec0.data)**2 norm2[kstep] = 0.5 * np.linalg.norm(rec2.data - rec0.data - h * rec1.data)**2 # Fit 1st order polynomials to the error sequences # Assert the 1st order error has slope dh^2 # Assert the 2nd order error has slope dh^4 p1 = np.polyfit(np.log10(scale), np.log10(norm1), 1) p2 = np.polyfit(np.log10(scale), np.log10(norm2), 1) print("\nlinearization F %s (so=%d) 1st (%.1f) = %.4f, 2nd (%.1f) = %.4f" % (shape, 8, dh**2, p1[0], dh**4, p2[0])) assert np.isclose(p1[0], dh**2, rtol=0.1) assert np.isclose(p2[0], dh**4, rtol=0.1) #NBVAL_INGNORE_OUTPUT # Plot linearization tests plt.figure(figsize=(12,10)) expected1 = np.empty(nstep) expected2 = np.empty(nstep) expected1[0] = norm1[0] expected2[0] = norm2[0] for kstep in range(1, nstep): expected1[kstep] = expected1[kstep - 1] / (dh**2) expected2[kstep] = expected2[kstep - 1] / (dh**4) msize = 10 plt.subplot(2,1,1) plt.plot(np.log10(scale), np.log10(expected1), '--k', label='1st order expected', linewidth=1.5) plt.plot(np.log10(scale), np.log10(norm1), '-r', label='1st order actual', linewidth=1.5) plt.plot(np.log10(scale), np.log10(expected1), 'ko', markersize=10, linewidth=3) plt.plot(np.log10(scale), np.log10(norm1), 'r*', markersize=10, linewidth=1.5) plt.xlabel('$log_{10}\ h$') plt.ylabel('$log_{10}\ \|| F(m+h dm) - F(m) \||$') plt.title('Linearization test (1st order error)') plt.legend(loc="lower right") plt.subplot(2,1,2) plt.plot(np.log10(scale), np.log10(expected2), '--k', label='2nd order expected', linewidth=3) plt.plot(np.log10(scale), np.log10(norm2), '-r', label='2nd order actual', linewidth=1.5) plt.plot(np.log10(scale), np.log10(expected2), 'ko', markersize=10, linewidth=3) plt.plot(np.log10(scale), np.log10(norm2), 'r*', markersize=10, linewidth=1.5) plt.xlabel('$log_{10}\ h$') plt.ylabel('$log_{10}\ \|| F(m+h dm) - F(m) - h J(dm)\||$') plt.title('Linearization test (2nd order error)') plt.legend(loc="lower right") plt.tight_layout() plt.show()
_____no_output_____
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
5. Jacobian operator linearity test, with respect to modelFor initial velocity model $m$ and random vectors $\delta m$ and $\delta r$, prove:$$\begin{aligned}\nabla F[m; q]\ (\alpha\ \delta m) &\approx \alpha\ \nabla F[m; q]\ \delta m \\[5pt](\nabla F[m; q])^\top (\alpha\ \delta r) &\approx \alpha\ (\nabla F[m; q])^\top \delta r\end{aligned}$$We first test the forward operator, and in the cell below that the adjoint operator.
#NBVAL_INGNORE_OUTPUT src0 = solver.geometry.src m0 = Function(name='m0', grid=solver.model.grid, space_order=8) m1 = Function(name='m1', grid=solver.model.grid, space_order=8) m0.data[:] = 1.5 # Model perturbation, box of random values centered on middle of model m1.data[:] = 0 size = 5 ns = 2 * size + 1 nx2, nz2 = shape[0]//2, shape[1]//2 m1.data[nx2-size:nx2+size, nz2-size:nz2+size] = \ -1 + 2 * np.random.rand(ns, ns) a = np.random.rand() rec1, _, _, _ = solver.jacobian(m1, src0, vp=m0) rec1.data[:] = a * rec1.data[:] m1.data[:] = a * m1.data[:] rec2, _, _, _ = solver.jacobian(m1, src0, vp=m0) # Normalize by rms of rec2, to enable using abolute tolerance below rms2 = np.sqrt(np.mean(rec2.data**2)) diff = (rec1.data - rec2.data) / rms2 print("\nlinearity forward J %s (so=%d) rms 1,2,diff; " "%+16.10e %+16.10e %+16.10e" % (shape, 8, np.sqrt(np.mean(rec1.data**2)), np.sqrt(np.mean(rec2.data**2)), np.sqrt(np.mean(diff**2)))) tol = 1.e-12 assert np.allclose(diff, 0.0, atol=tol) #NBVAL_INGNORE_OUTPUT src0 = solver.geometry.src m0 = Function(name='m0', grid=solver.model.grid, space_order=8) m1 = Function(name='m1', grid=solver.model.grid, space_order=8) m0.data[:] = 1.5 # Model perturbation, box of random values centered on middle of model m1.data[:] = 0 size = 5 ns = 2 * size + 1 nx2, nz2 = shape[0]//2, shape[1]//2 m1.data[nx2-size:nx2+size, nz2-size:nz2+size] = \ -1 + 2 * np.random.rand(ns, ns) a = np.random.rand() rec0, u0, _ = solver.forward(src0, vp=m0, save=True) dm1, _, _, _ = solver.jacobian_adjoint(rec0, u0, vp=m0) dm1.data[:] = a * dm1.data[:] rec0.data[:] = a * rec0.data[:] dm2, _, _, _ = solver.jacobian_adjoint(rec0, u0, vp=m0) # Normalize by rms of rec2, to enable using abolute tolerance below rms2 = np.sqrt(np.mean(dm2.data**2)) diff = (dm1.data - dm2.data) / rms2 print("\nlinearity adjoint J %s (so=%d) rms 1,2,diff; " "%+16.10e %+16.10e %+16.10e" % (shape, 8, np.sqrt(np.mean(dm1.data**2)), np.sqrt(np.mean(dm2.data**2)), np.sqrt(np.mean(diff**2))))
Operator `IsoFwdOperator` run in 0.55 s Operator `IsoJacobianAdjOperator` run in 0.13 s Operator `IsoJacobianAdjOperator` run in 2.34 s
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
6. Jacobian operator adjoint test, with respect to model perturbation and receiver wavefield perturbation For initial velocity model $m$ and random vectors $\delta m$ and $\delta r$, prove:$$\delta r \cdot \nabla F[m; q]\ \delta m \approx \delta m \cdot (\nabla F[m; q])^\top \delta r$$
#NBVAL_INGNORE_OUTPUT src0 = solver.geometry.src m0 = Function(name='m0', grid=solver.model.grid, space_order=8) dm1 = Function(name='dm1', grid=solver.model.grid, space_order=8) m0.data[:] = 1.5 # Model perturbation, box of random values centered on middle of model dm1.data[:] = 0 size = 5 ns = 2 * size + 1 nx2, nz2 = shape[0]//2, shape[1]//2 dm1.data[nx2-size:nx2+size, nz2-size:nz2+size] = \ -1 + 2 * np.random.rand(ns, ns) # Data perturbation rec1 = solver.geometry.rec nt, nr = rec1.data.shape rec1.data[:] = np.random.rand(nt, nr) # Nonlinear modeling rec0, u0, _ = solver.forward(src0, vp=m0, save=True) # Linearized modeling rec2, _, _, _ = solver.jacobian(dm1, src0, vp=m0) dm2, _, _, _ = solver.jacobian_adjoint(rec1, u0, vp=m0) sum_m = np.dot(dm1.data.reshape(-1), dm2.data.reshape(-1)) sum_d = np.dot(rec1.data.reshape(-1), rec2.data.reshape(-1)) diff = (sum_m - sum_d) / (sum_m + sum_d) print("\nadjoint J %s (so=%d) sum_m, sum_d, diff; %16.10e %+16.10e %+16.10e" % (shape, 8, sum_m, sum_d, diff)) assert np.isclose(diff, 0., atol=1.e-11)
Operator `IsoFwdOperator` run in 0.09 s Operator `IsoJacobianFwdOperator` run in 4.05 s
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
7. Skew symmetry for shifted derivativesEnsure for random $x_1, x_2$ that the Devito shifted derivative operators $\overrightarrow{\partial_x}$ and $\overleftarrow{\partial_x}$ are skew symmetric by verifying the following dot product test.$$x_2 \cdot \left( \overrightarrow{\partial_x}\ x_1 \right) \approx -\ x_1 \cdot \left( \overleftarrow{\partial_x}\ x_2 \right) $$We use Devito to implement the following two equations for random $f_1, g_1$, and verify that the test passes by ensuring that the relative error term vanishes.$$\begin{aligned}f_2 = \overrightarrow{\partial_x}\ f_1 \\[5pt]g_2 = \overleftarrow{\partial_x}\ g_1 \\[7pt]\frac{\displaystyle f_1 \cdot g_2 + g_1 \cdot f_2} {\displaystyle f_1 \cdot g_2 - g_1 \cdot f_2}\ <\ \epsilon\end{aligned}$$
#NBVAL_INGNORE_OUTPUT # Make 1D grid to test derivatives n = 101 d = 1.0 shape = (n, ) spacing = (1 / (n-1), ) origin = (0., ) extent = (d * (n-1), ) dtype = np.float64 # Initialize Devito grid and Functions for input(f1,g1) and output(f2,g2) # Note that space_order=8 allows us to use an 8th order finite difference # operator by properly setting up grid accesses with halo cells grid1d = Grid(shape=shape, extent=extent, origin=origin, dtype=dtype) x = grid1d.dimensions[0] f1 = Function(name='f1', grid=grid1d, space_order=8) f2 = Function(name='f2', grid=grid1d, space_order=8) g1 = Function(name='g1', grid=grid1d, space_order=8) g2 = Function(name='g2', grid=grid1d, space_order=8) # Fill f1 and g1 with random values in [-1,+1] f1.data[:] = -1 + 2 * np.random.rand(n,) g1.data[:] = -1 + 2 * np.random.rand(n,) # Equation defining: [f2 = forward 1/2 cell shift derivative applied to f1] equation_f2 = Eq(f2, f1.dx(x0=x+0.5*x.spacing)) # Equation defining: [g2 = backward 1/2 cell shift derivative applied to g1] equation_g2 = Eq(g2, g1.dx(x0=x-0.5*x.spacing)) # Define an Operator to implement these equations and execute op = Operator([equation_f2, equation_g2]) op() # Compute the dot products and the relative error f1g2 = np.dot(f1.data, g2.data) g1f2 = np.dot(g1.data, f2.data) diff = (f1g2+g1f2)/(f1g2-g1f2) tol = 100 * np.finfo(dtype).eps print("f1g2, g1f2, diff, tol; %+.6e %+.6e %+.6e %+.6e" % (f1g2, g1f2, diff, tol)) # At last the unit test # Assert these dot products are float epsilon close in relative error assert diff < 100 * np.finfo(np.float32).eps
Operator `Kernel` run in 0.01 s
MIT
examples/seismic/skew_self_adjoint/ssa_03_iso_correctness.ipynb
dabiged/devito
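Tests 3, 6, and 7 above all use the same relative-error form of the dot-product test. As an aside, that check can be factored into a small helper; the sketch below is not part of test_wavesolver_iso.py, it simply restates the expressions already used in the cells above.

import numpy as np

def dot_test_relative_error(a, b, c, d):
    # Relative error of the dot-product test: compares <a, b> against <c, d>
    s1 = np.dot(np.asarray(a).reshape(-1), np.asarray(b).reshape(-1))
    s2 = np.dot(np.asarray(c).reshape(-1), np.asarray(d).reshape(-1))
    return (s1 - s2) / (s1 + s2)

# e.g. the adjoint test of section 3 becomes:
# diff = dot_test_relative_error(src1.data, src2.data, rec1.data, rec2.data)
# assert np.isclose(diff, 0., atol=1.e-12)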
Support Vector Regression (SVR) Importing the libraries
import numpy as np import matplotlib.pyplot as plt import pandas as pd
_____no_output_____
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
Importing the dataset
dataset = pd.read_csv('Position_Salaries.csv') x = dataset.iloc[:, 1:-1].values y = dataset.iloc[:, -1].values print(x) print(y) y = y.reshape(len(y), 1) print(y)
[[ 45000] [ 50000] [ 60000] [ 80000] [ 110000] [ 150000] [ 200000] [ 300000] [ 500000] [1000000]]
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
Feature Scaling
from sklearn.preprocessing import StandardScaler sc_x = StandardScaler() sc_y = StandardScaler() x = sc_x.fit_transform(x) y = sc_y.fit_transform(y) print(x) print(y)
[[-0.72004253] [-0.70243757] [-0.66722767] [-0.59680786] [-0.49117815] [-0.35033854] [-0.17428902] [ 0.17781001] [ 0.88200808] [ 2.64250325]]
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
Training the SVR model on the whole dataset
from sklearn.svm import SVR regressor = SVR(kernel = 'rbf') regressor.fit(x, y)
/usr/local/lib/python3.7/dist-packages/sklearn/utils/validation.py:993: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel(). y = column_or_1d(y, warn=True)
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
Predicting a new result
sc_y.inverse_transform([regressor.predict(sc_x.transform([[6.5]]))]) #requires 2D array, therefore added square brackets before regressor.predict
_____no_output_____
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
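Because every prediction has to pass through the feature scaler and back through the target scaler, it can be handy to wrap the round trip in a small helper. The sketch below is only for convenience; predict_salary is a name introduced here, and it assumes the fitted regressor, sc_x and sc_y objects created in the cells above.

def predict_salary(level):
    # scale the input, predict in scaled space, then undo the target scaling
    level_scaled = sc_x.transform([[level]])
    salary_scaled = regressor.predict(level_scaled).reshape(-1, 1)
    return sc_y.inverse_transform(salary_scaled)[0, 0]

predict_salary(6.5)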
Visualising the SVR results
m=regressor.predict(x) plt.scatter(sc_x.inverse_transform(x), sc_y.inverse_transform(y), color = 'red') plt.plot(sc_x.inverse_transform(x), sc_y.inverse_transform(m.reshape(len(m),1)), color='blue') # requires 2d array therefore used reshape() plt.title('truth or bluff (Support Vector Regression)') plt.xlabel('Level') plt.ylabel('Salary') plt.show()
_____no_output_____
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
Visualising the SVR results (for higher resolution and smoother curve)
X_unscaled = sc_x.inverse_transform(x)
y_unscaled = sc_y.inverse_transform(y)
X_grid = np.arange(min(x), max(x), 0.1)
X_grid = X_grid.reshape((len(X_grid), 1))
X_grid_unscaled = sc_x.inverse_transform(X_grid)
y_pred_grid = regressor.predict(X_grid)
y_pred_grid = sc_y.inverse_transform([y_pred_grid])
y_pred_grid = y_pred_grid.reshape((len(X_grid), 1))
plt.scatter(X_unscaled, y_unscaled, color = 'red')
plt.plot(X_grid_unscaled, y_pred_grid, color = 'blue')
plt.title('Level vs Salary (Support Vector Regression)')
plt.xlabel('Level')
plt.ylabel('Salary')
plt.show()
_____no_output_____
MIT
Regression/Support_vector_regression.ipynb
AstitvaSharma/ML_Algorithms
The fundamental data structures are Series and DataFrame
s = pd.Series(np.random.randn(4), index =['a','f','t','y']) s s['a']
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
The data argument can be a dict, an ndarray, or even a scalar
data = {'a': 3, 'b':4, 'c': 5, 'f':'something_else'} data s_dict = pd.Series(data, index=data.keys()) s_dict
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
Dict keys that are not in the index are dropped, and index labels with no matching key in the dict are filled with NaN
s_dict_with_nan = pd.Series(data, index = ['a','b', 'j']) s_dict_with_nan
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
Scalars work differently: the single value is repeated for every label in the index
s_scalar = pd.Series(3, index=range(5))
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
Passing a one-element list such as [3] works the same way, but passing [3,4] fails because the index expects 5 values, not 2
s_scalar
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
A Series behaves much like an ndarray if you have worked with NumPy before. I do not have a lot of NumPy experience so I can't comment on the full capabilities, but as far as I know you can apply vectorized operations for better performance and slice in the same way we do with NumPy ndarrays.
s
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
MENTIONING A PYTHON DATA TYPE IN A VARIABLE NAME IS NOT GOOD PRACTICE, SO TRY NOT TO
s[s>s.median()] s[[3, 0, 1]] np.exp(s) s.values s.keys s.keys() s.index try : some_random_var = s['g'] # Raises key error except KeyError : print('Caught key Error') s.f # Can also access elements this way s.a s
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
Vectorized operations - Start of the end of matlab RIP
s+s s*2 s*3 s/2
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
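One detail worth adding about vectorized operations: unlike NumPy arrays, two Series are aligned on their index labels, not on position. A quick illustration (the Series below are created just for this example):

s_one = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
s_two = pd.Series([10, 20, 30], index=['b', 'c', 'd'])
# 'a' and 'd' have no partner in the other Series, so those labels come back as NaN
s_one + s_two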
Moving to Pandas DataFrame According to definition from https://pandas.pydata.org/pandas-docs/stable/dsintro.htmldsintro __DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. Like Series, DataFrame accepts many different kinds of input: __- Dict of 1D ndarrays, lists, dicts, or Series- 2-D numpy.ndarray- Structured or record ndarray- A Series- Another DataFrame __Along with the data, you can optionally pass index (row labels) and columns (column labels) arguments.__
data = {'one': pd.Series([1,2,3,4], index=['a','b','c','d']), 'two': pd.Series([3,4,5,56,6], index=['a','b','f','e','y'])} df = pd.DataFrame(data) df
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
As you can see, it merged the two indexes together and filled the missing values with NaN
df_1 = pd.DataFrame(data, index=['a','b','c','f'])
df_1
df_2 = pd.DataFrame(data, columns=['two'])
df_2
df
df.index
df.columns
data_for_dict = {'one': [1,2,4,5], 'two': [2.,5.4,4.,5]}
df_from_dict = pd.DataFrame(data_for_dict)
df_from_dict['one']
data = np.ones((2,8))
data
try:
    data.keys()  # an ndarray has no keys() method
except AttributeError:
    print('Caught AttributeError: ndarray has no keys()')
s = pd.Series([3,4,5,5])
s
s.values
s.keys()
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
__A difference to note here is that Series does have a values and keys attribute whereas an ndarray doesn't. Good to know__
rows = [[1,2,3,43],[3,5,6,6],[66,6,6]] df_rows = pd.DataFrame(rows) df_rows df_rows_with_index = pd.DataFrame(rows, index=['first', 'second','third']) df_rows_with_index # Number of indices should always match the rows count else will raise a shape error try: df_test = pd.DataFrame({'one':[2,23,4,5,56], 'two': [1,3,4,4]}) except ValueError as e: print('Value error raised') print(e) df_test_2 = pd.DataFrame([[1,2,3,4],[2,3,3]]) df_test_2 df_test_3 = pd.DataFrame([[1,2,3,4],[2,3,3]], columns=['one','two','three', 'four']) df_test_3
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
As we can see from the couple of test DataFrames above, when we pass a dictionary of lists as data to DataFrame, the lists need to be of the same length, but if we pass a list of rows, the lengths are adjusted automatically. Strange, but worth knowing the reasoning.
# df_test_4 was never defined above; display the last DataFrame that was created instead
df_test_3
_____no_output_____
MIT
Untitled.ipynb
arora-anmol/pandas-practice
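A likely explanation for the difference observed above: a dict of plain Python lists carries no index information, so pandas requires all the lists to have the same length, while a dict of Series (or a list of row lists) gives pandas enough structure to align the values and pad the gaps with NaN. A small sketch of the workaround, using the same values as the failing example:

# Wrapping the unequal-length lists in Series lets pandas align them and fill NaN
df_uneven = pd.DataFrame({'one': pd.Series([2, 23, 4, 5, 56]),
                          'two': pd.Series([1, 3, 4, 4])})
df_uneven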
Working with POSTGIS In this chapter we will cover the following topics: Executing a PostGIS ST_Buffer analysis query and exporting it to GeoJSON Finding out whether a point is inside a polygon Splitting LineStrings at intersections using ST_Node Checking the validity of LineStrings Executing a spatial join and assigning point attributes to a polygon Conducting a complex spatial analysis query using ST_Distance() IntroductionA spatial database is nothing but a standard database that can store geometry and execute spatial queries in their simplest forms. We will explore how to run spatial analysis queries, handle connections, and more, all from our Python code. Your ability to answer spatial questions such as "I want to locate all the hotels that are within 2 km of a golf course and less than 5 km from a park" is where PostGIS comes into play. This chaining of requests into a model is where the powers of spatial analysis shine.We will work with the most popular and powerful open source spatial database called PostgreSQL, along with the PostGIS extension, including over 150 functions. Basically, we'll get a full-blown GIS with complex spatial analysis functions for both vectors and rasters, spatial data types, and diverse methods to move spatial data around.If you are looking for more information on PostGIS and a good read, please check out PostGIS Cookbook by Paolo Corti (available at https://www.packtpub.com/big-data-and-business-intelligence/postgis-cookbook). This book explores the wider use of PostGIS and includes a full chapter on PostGIS Programming using Python. Executing a PostGIS ST_BUFFER Analysis Query and exporting it to GeoJSONLet's start by executing our first spatial analysis query from Python against our already running PostgreSQL and PostGIS database. The goal is to generate a 100 m buffer around all schools and export the new buffer polygon to GeoJSON, including the name of a school. The end result will be shown on this map, available (https://github.com/mdiener21/python-geospatial-analysis-cookbook/blob/master/ch04/geodata/out_buff_100m.geojson) on GitHub.To get started, we'll use our data in the PostGIS database. We will begin by accessing our schools table that we uploaded to PostGIS in the Batch importing a folder of Shapefiles into PostGIS using ogr2ogr recipe of Chapter 3, Moving Spatial Data from One Format to Another.Connecting to a PostgreSQL and PostGIS database is accomplished with Psycopg, which is a Python DB API (http://initd.org/psycopg/) implementation. We've already installed this in Chapter 1, Setting Up Your Geospatial Python Environment along with PostgreSQL, Django, and PostGIS.
#!/usr/bin/env python import psycopg2 import json from geojson import loads, Feature, FeatureCollection # Database connection information db_host = "localhost" db_user = "calvin" db_passwd = "planets" db_database = "py_test" db_port = "5432" # connect to database conn = psycopg2.connect(host=db_host, user=db_user, port=db_port, password=db_passwd, database=db_database) # create a cursor cur = conn.cursor() # the PostGIS buffer query buffer_query = """SELECT ST_AsGeoJSON(ST_Transform( ST_Buffer(wkb_geometry, 100,'quad_segs=8'),4326)) AS geom, name FROM geodata.schools""" # execute the query cur.execute(buffer_query) # return all the rows, we expect more than one dbRows = cur.fetchall() # an empty list to hold each feature of our feature collection new_geom_collection = [] # loop through each row in result query set and add to my feature collection # assign name field to the GeoJSON properties for each_poly in dbRows: geom = each_poly[0] name = each_poly[1] geoj_geom = loads(geom) myfeat = Feature(geometry=geoj_geom, properties={'name': name}) new_geom_collection.append(myfeat) # use the geojson module to create the final Feature Collection of features created from for loop above my_geojson = FeatureCollection(new_geom_collection) # define the output folder and GeoJSon file name output_geojson_buf = "../geodata/out_buff_100m.geojson" # save geojson to a file in our geodata folder def write_geojson(): fo = open(output_geojson_buf, "w") fo.write(json.dumps(my_geojson)) fo.close() # run the write function to actually create the GeoJSON file write_geojson() # close cursor cur.close() # close connection conn.close()
_____no_output_____
MIT
spatial_GIS_RS/python_notebooks/4. Working_with_Postgis.ipynb
OpitiCalvin/Scripts_n_Tutorials
How it worksThe database connection is using the pyscopg2 module, so we import the libraries at the start alongside geojson and the standard json modules to handle our GeoJSON export.Our connection is created and then followed immediately with our SQL Buffer query string. The query uses three PostGIS functions. Working your way from the inside out, you will see the ST_Buffer function taking in the geometry of the school points followed by the 100 m buffer distance and the number of circle segments that we would like to generate. ST_Transform then takes the newly created buffer geometry and transforms it into the WGS84 coordinate system (EPSG: 4326) so that we can display it on GitHub, which only displays WGS84 and the projected GeoJSON. Lastly, we'll use the ST_asGeoJSON function to export our geometry as the GeoJSON geometry.Note:PostGIS does not export the complete GeoJSON syntax, only the geometry in the form of the GeoJSON geometry. This is the reason that we need to complete our GeoJSON using the Python geojson module.All of this means that we not only perform analysis on the query, but we also specify the output format and coordinate system all in one go.Next, we will execute the query and fetch all the returned objects using cur.fetchall() so that we can later loop through each returned buffer polygon. Our new_geom_collection list will store each of the new geometries and the feature names. Next, in the for loop function, we'll use the geojson module function, loads(geom), to input our geometry into a GeoJSON geometry object. This is followed by the Feature()function that actually creates our GeoJSON feature. This is then used as the input for the FeatureCollection function where the final, completed GeoJSON is created.Lastly, we'll need to write this new GeoJSON file to disk and save it. Hence, we'll use the new file object where we use the standard Python json.dumps module to export our FeatureCollection.We'll do a little clean up to close the cursor object and connection. Bingo! We are now done and can visualize our final results. Finding out whether a point is inside a polygonA point inside a polygon analysis query is a very common spatial operation. This query can identify objects located within an area such as a polygon. The area of interest in this example is a 100 m buffer polygon around bike paths and we would like to locate all schools that are inside this polygon. Getting readyIn the previous section, we used the schools table to create a buffer. This time around, we will use this table as our input points table. The bikeways table that we imported in Chapter 3, Moving Spatial Data from One Format to Another, will be used as our input lines to generate a new 100 m buffer polygon. Be sure, however, that you have the two datasets in your local PostgreSQL database. How to do itNow, let's dive into some more code to find schools located within 100 m of the bikeways in order to find points inside a polygon:
#!/usr/bin/env python # -*- coding: utf-8 -*- import json import psycopg2 from geojson import loads, Feature, FeatureCollection # Database Connection Info db_host = "localhost" db_user = "pluto" db_passwd = "stars" db_database = "py_geoan_cb" db_port = "5432" # connect to DB conn = psycopg2.connect(host=db_host, user=db_user, port=db_port, password=db_passwd, database=db_database) # create a cursor cur = conn.cursor() # uncomment if needed # cur.execute("Drop table if exists geodata.bikepath_100m_buff;") # query to create a new polygon 100m around the bikepath new_bike_buff_100m = """ CREATE TABLE geodata.bikepath_100m_buff AS SELECT name, ST_Buffer(wkb_geometry, 100) AS geom FROM geodata.bikeways; """ # run the query cur.execute(new_bike_buff_100m) # commit query to database conn.commit() # query to select schools inside the polygon and output geojson is_inside_query = """ SELECT s.name AS name, ST_AsGeoJSON(ST_Transform(s.wkb_geometry,4326)) AS geom FROM geodata.schools AS s, geodata.bikepath_100m_buff AS bp WHERE ST_WITHIN(s.wkb_geometry, bp.geom); """ # execute the query cur.execute(is_inside_query) # return all the rows, we expect more than one db_rows = cur.fetchall() # an empty list to hold each feature of our feature collection new_geom_collection = [] def export2geojson(query_result): """ loop through each row in result query set and add to my feature collection assign name field to the GeoJSON properties :param query_result: pg query set of geometries :return: new geojson file """ for row in db_rows: name = row[0] geom = row[1] geoj_geom = loads(geom) myfeat = Feature(geometry=geoj_geom, properties={'name': name}) new_geom_collection.append(myfeat) # use the geojson module to create the final Feature # Collection of features created from for loop above my_geojson = FeatureCollection(new_geom_collection) # define the output folder and GeoJSon file name output_geojson_buf = "../geodata/out_schools_in_100m.geojson" # save geojson to a file in our geodata folder def write_geojson(): fo = open(output_geojson_buf, "w") fo.write(json.dumps(my_geojson)) fo.close() # run the write function to actually create the GeoJSON file write_geojson() export2geojson(db_rows)
_____no_output_____
MIT
spatial_GIS_RS/python_notebooks/4. Working_with_Postgis.ipynb
OpitiCalvin/Scripts_n_Tutorials
You can now view your newly created GeoJSON file on a great little site created by Mapbox at http://www.geojson.io. Simply drag and drop your GeoJSON file from Windows Explorer in Windows or Nautilus in Ubuntu onto the http://www.geojson.io web page and, Bob's your uncle, you should see 50 or so schools that are located within 100 m of a bikeway in Vancouver. How it worksWe will reuse code to make our database connection, so this should be familiar to you at this point. The new_bike_buff_100m query string contains our query to generate a new 100 m buffer polygon around all the bikeways. We need to execute this query and commit it to the database so that we can access this new set of polygons as input to our actual query that will find schools (points) located inside this new buffer polygon.The is_inside_query string actually does the hard work for us by selecting selecting the values from the field name and the geometry from the geom field. The geometry is wrapped up in two other PostGIS functions to allow us to export our data as GeoJSON in the WGS 84 coordinate system. This will be the input geometry needed to generate our final new GeoJSON file.The WHERE clause uses the ST_Within function to see whether a point is inside the polygon and returns True if the point is within the buffer polygon.Now, we've created a new function that simply wraps up our export to the GeoJSON code that was used in the previous, Executing a PostGIS ST_Buffer analysis query and exporting it to GeoJSON, recipe. This new export2geojson function simply takes one input of our PostGIS query and outputs a GeoJSON file. To set the name and location of the new output file, simply replace the path and name within the function.Finally, all we need to do is call the new function to export the GeoJSON file using the db_rows variable that contains our list of schools as points located within the 100 m buffer polygon. There's more...This example to find all schools located within 100 m of the bike paths could be completed using another PostGIS function called ST_Dwithin.The SQL to select all the schools located within 100 m of the bikepaths would look like this:SELECT * FROM geodata.bikeways as b, geodata.schools as s where ST_DWithin(b.wkb_geometry, s.wkb_geometry, 100) Splitting LineStrings at Intersections using ST_NodeWorking with road data is usually a tricky business because the validity of the data and data structure plays a very important role. If you want to do anything useful with your road data, such as building a routing network, you will need to prepare the data first. The first task is usually to segmentize your lines, which means splitting all lines at intersections where LineStrings cross each other, creating a base network road dataset.NoteBe aware that this recipe will split all lines on all intersections regardless of whether, for example, there is a road-bridge overpass where no intersection should be created. Getting ReadyBefore we get into the details of how to do this, we will use a small section of the OpenStreetMap (OSM) road data for our example. The OSM data is available in your /ch04/geodata/folder called vancouver-osm-data.osm. This data was simply downloaded from the www.openstreetmap.org home page using the Export button located at the top of the page:The OSM data contains not only roads but all the other points and polygons located within the extent that I have chosen. 
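If you want to try the ST_DWithin variant from Python, a minimal sketch could look like the following; it assumes the cursor cur from the recipe is still open and only returns the school names rather than all columns.

dwithin_query = """
    SELECT s.name
    FROM geodata.bikeways AS b, geodata.schools AS s
    WHERE ST_DWithin(b.wkb_geometry, s.wkb_geometry, 100);
"""
cur.execute(dwithin_query)
print(cur.fetchall())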
The region of interest is again the Burrard Street bridge in Vancouver.We are going to need to extract all the roads and import them into our PostGIS table. This time, let's try using the ogr2ogr command line directly from the console to upload the OSM streets to our PostGIS database:ogr2ogr -lco SCHEMA=geodata -nlt LINESTRING -f "PostgreSQL" PG:"host=localhost port=5432 user=pluto dbname=py_geoan_cb password=stars" ../geodata/vancouver-osm-data.osm lines -t_srs EPSG:3857This assumes that your OSM data is in the /ch04/geodata folder and the command is run while you are located in the /ch04/code folder.Now this really long thing means that we connect to our PostGIS database as our output and input the vancouver-osm-data.osm file. Create a new table called lines and transform the input OSM projection to EPSG:3857. All data exported from OSM is in EPSG:4326. You can, of course, leave it in this system and simply remove the -t_srs EPSG:3857 part of the command line option.We are now ready to rock and roll with the splitting at intersections. If you like, go ahead and open the data in QGIS (Quantum GIS). In QGIS, you will see that the road data is not split at all road intersections as shown in this screenshot:Here, you can see that McNicoll Avenue is a single LineString crossing over Cypress Street. After we've completed the recipe, we will see that McNicoll Avenue will be split at this intersection.
#!/usr/bin/env python
import psycopg2
import json
from geojson import loads, Feature, FeatureCollection

# Database Connection Info
db_host = "localhost"
db_user = "pluto"
db_passwd = "stars"
db_database = "py_geoan_cb"
db_port = "5432"

# connect to DB
conn = psycopg2.connect(host=db_host, user=db_user, port=db_port, password=db_passwd, database=db_database)

# create a cursor
cur = conn.cursor()

# drop table if exists
# cur.execute("DROP TABLE IF EXISTS geodata.split_roads;")

# split lines at intersections query
split_lines_query = """
 CREATE TABLE geodata.split_roads AS
   SELECT (ST_Dump(ST_Node(ST_Collect(wkb_geometry)))).geom AS geom
   FROM geodata.lines;"""

cur.execute(split_lines_query)
conn.commit()

cur.execute("ALTER TABLE geodata.split_roads ADD COLUMN id serial;")
cur.execute("ALTER TABLE geodata.split_roads ADD CONSTRAINT split_roads_pkey PRIMARY KEY (id);")

# close cursor
cur.close()

# close connection
conn.close()
_____no_output_____
MIT
spatial_GIS_RS/python_notebooks/4. Working_with_Postgis.ipynb
OpitiCalvin/Scripts_n_Tutorials
Well, this was quite simple and we can now see that McNicoll Avenue is split at the intersection with Cypress Street. How it worksLooking at the code, we can see that the database connection remains the same and the only new thing is the query itself that creates the split. Here three separate PostGIS functions are used to obtain our results: The first function, working our way inside-out in the query, is ST_Collect(wkb_geometry). This simply takes our original geometry column as input and combines the geometries into a single collection. Next up is the actual splitting of the lines using ST_Node(geometry), which takes the new geometry collection and nodes it, splitting our LineStrings at intersections. Finally, we use ST_Dump() as a set-returning function, which explodes the LineString geometry collection into individual LineStrings. The .geom at the end of the query specifies that we only want to export the geometry and not the returned array numbers of the split geometry. Now, we execute and commit the query to the database. The commit is important because, otherwise, the query would run but would not actually create the new table that we want to generate. Last but not least, we close our cursor and connection. That is that; we now have split LineStrings.Note:Be aware that the new split LineStrings do NOT contain the street names and other attributes. To export the names, we would need to do a join on our data. Such a query to include the attributes on the newly created LineStrings could look like this:CREATE TABLE geodata.split_roads_attributes AS SELECT r.geom, li.name, li.highway FROM geodata.lines li, geodata.split_roads r WHERE ST_CoveredBy(r.geom, li.wkb_geometry) Checking the Validity of LineStringsWorking with road data has many areas to watch out for and one of these is invalid geometry. Our source data is OSM and is, therefore, collected by a community of users that are not trained GIS professionals, which can result in errors. To execute spatial queries, the data must be valid or we will get results with errors or no results at all.PostGIS includes the ST_IsValid() function that returns True/False on the basis of whether a geometry is valid or not. There is also the ST_IsValidReason() function that will output a text description of the geometry error. Finally, the ST_IsValidDetail() function returns whether the geometry is valid along with the reason and location of the geometry error. These three functions accomplish similar tasks and selecting one depends on what you want to accomplish. How to do it...1. Now, to determine whether geodata.lines are valid, we will run another query that will list all invalid geometries if there are any:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import psycopg2

# Database Connection Info
db_host = "localhost"
db_user = "pluto"
db_passwd = "stars"
db_database = "py_geoan_cb"
db_port = "5432"

# connect to DB
conn = psycopg2.connect(host=db_host, user=db_user, port=db_port, password=db_passwd, database=db_database)

# create a cursor
cur = conn.cursor()

# the PostGIS validity query
valid_query = """SELECT
                   ogc_fid,
                   ST_IsValidDetail(wkb_geometry)
                 FROM
                   geodata.lines
                 WHERE NOT
                   ST_IsValid(wkb_geometry);
                 """

# execute the query
cur.execute(valid_query)

# return all the rows, we expect more than one
validity_results = cur.fetchall()
print(validity_results)

# close cursor
cur.close()

# close connection
conn.close()
_____no_output_____
MIT
spatial_GIS_RS/python_notebooks/4. Working_with_Postgis.ipynb
OpitiCalvin/Scripts_n_Tutorials
This query should return an empty Python list, which means that we have no invalid geometries. If there are objects in your list, then you'll know that you have some manual work to do to correct those geometries. Your best bet is to fire up QGIS and get started with digitizing tools to clean things up. Executing a spatial join and assigning point attributes to a polygonWe'll now get back to some more golf action where we would like to execute a spatial attribute join. We're given a situation where we have a set of polygons, in this case, these are in the form of golf greens without any hole number. Our hole number is stored in a point dataset that is located spatially within the green of each hole. We would like to assign each green its appropriate hole number based on its location within the polygon.The OSM data from the Pebble Beach Golf Course located in Monterey California is our source data. This golf course is one the great golf courses on the PGA tour and is well mapped in OSM. Getting readyImporting our data into PostGIS will be the first step to execute our spatial query. This time around, we will use the shp2pgsql tool to import our data to change things a little since there are so many ways to get data into PostGIS. The shp2pgsql tool is definitely the most well-tested and common way to import Shapefiles into PostGIS. Let's get going and perform this import once again, executing this tool directly from the command line.For Windows users, this should work, but check that the paths are correct or that shp2pgsql.exe is in your system path variable. By doing this, you save having to type the full path to execute.e.g.shp2pgsql -s 4326 ..\geodata\shp\pebble-beach-ply-greens.shp geodata.pebble_beach_greens | psql -h localhost -d py_geoan_cb -p 5432 -U plutoOn a Linux machine your command is basically the same without the long path, assuming that your system links were all set up when you installed PostGIS in Chapter 1, Setting Up Your Geospatial Python Environment.Next up, we need to import our points with the attributes, so let's get to it as follows:shp2pgsql -s 4326 ..\geodata\shp\pebble-beach-pts-hole-num-green.shp geodata.pebble_bea-ch_hole_num | psql -h localhost -d py_geoan_cb -p 5432 -U postgresThat's that! We now have our points and polygons available in the PostGIS Schema geodata setting, which sets the stage for our spatial join. How to do itThe core work is done once again inside our PostGIS query string, assigning the attributes to the polygons, so follow along:
#!/usr/bin/env python # -*- coding: utf-8 -*- import psycopg2 # Database Connection Info db_host = "localhost" db_user = "pluto" db_passwd = "stars" db_database = "py_geoan_cb" db_port = "5432" # connect to DB conn = psycopg2.connect(host=db_host, user=db_user, port=db_port, password=db_passwd, database=db_database) # create a cursor cur = conn.cursor() # assign polygon attributes from points spatial_join = """ UPDATE geodata.pebble_beach_greens AS g SET name = h.name FROM geodata.pebble_beach_hole_num AS h WHERE ST_Contains(g.geom, h.geom); """ cur.execute(spatial_join) conn.commit() # close cursor cur.close() # close connection conn.close()
_____no_output_____
MIT
spatial_GIS_RS/python_notebooks/4. Working_with_Postgis.ipynb
OpitiCalvin/Scripts_n_Tutorials
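After the UPDATE has been committed, a quick sanity check can confirm that every green was assigned a hole number. This check is only a suggestion and not part of the recipe; run it with an open cursor cur, i.e. before the cur.close() call above or from a fresh cursor.

check_query = "SELECT COUNT(*) FROM geodata.pebble_beach_greens WHERE name IS NULL;"
cur.execute(check_query)
print(cur.fetchone())  # expect (0,) if every green received a hole number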
**Getting Started With Spark using Python** Estimated time needed: **15** minutes ![](http://spark.apache.org/images/spark-logo.png) The Python API Spark is written in Scala, which compiles to Java bytecode, but you can write python code to communicate to the java virtual machine through a library called py4j. Python has the richest API, but it can be somewhat limiting if you need to use a method that is not available, or if you need to write a specialized piece of code. The latency associated with communicating back and forth to the JVM can sometimes cause the code to run slower.An exception to this is the SparkSQL library, which has an execution planning engine that precompiles the queries. Even with this optimization, there are cases where the code may run slower than the native scala version.The general recommendation for PySpark code is to use the "out of the box" methods available as much as possible and avoid overly frequent (iterative) calls to Spark methods. If you need to write high-performance or specialized code, try doing it in scala.But hey, we know Python rules, and the plotting libraries are way better. So, it's up to you! Objectives In this lab, we will go over the basics of Apache Spark and PySpark. We will start with creating the SparkContext and SparkSession. We then create an RDD and apply some basic transformations and actions. Finally we demonstrate the basics dataframes and SparkSQL.After this lab you will be able to:* Create the SparkContext and SparkSession* Create an RDD and apply some basic transformations and actions to RDDs* Demonstrate the use of the basics Dataframes and SparkSQL *** Setup For this lab, we are going to be using Python and Spark (PySpark). These libraries should be installed in your lab environment or in SN Labs.
# Installing required packages !pip install pyspark !pip install findspark import findspark findspark.init() # PySpark is the Spark API for Python. In this lab, we use PySpark to initialize the spark context. from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Exercise 1 - Spark Context and Spark Session In this exercise, you will create the Spark Context and initialize the Spark session needed for SparkSQL and DataFrames.SparkContext is the entry point for Spark applications and contains functions to create RDDs such as `parallelize()`. SparkSession is needed for SparkSQL and DataFrame operations. Task 1: Creating the spark session and context
# Creating a spark context class sc = SparkContext() # Creating a spark session spark = SparkSession \ .builder \ .appName("Python Spark DataFrames basic example") \ .config("spark.some.config.option", "some-value") \ .getOrCreate()
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Task 2: Initialize Spark sessionTo work with dataframes we just need to verify that the spark session instance has been created.
spark
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Exercise 2: RDDsIn this exercise we work with Resilient Distributed Datasets (RDDs). RDDs are Spark's primitive data abstraction and we use concepts from functional programming to create and manipulate RDDs. Task 1: Create an RDD.For demonstration purposes, we create an RDD here by calling `sc.parallelize()`\We create an RDD which has integers from 1 to 30.
data = range(1,30) # print first element of iterator print(data[0]) len(data) xrangeRDD = sc.parallelize(data, 4) # this will let us know that we created an RDD xrangeRDD
1
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Task 2: Transformations A transformation is an operation on an RDD that results in a new RDD. The transformed RDD is generated rapidly because the new RDD is lazily evaluated, which means that the calculation is not carried out when the new RDD is generated. The RDD will contain a series of transformations, or computation instructions, that will only be carried out when an action is called. In this transformation, we reduce each element in the RDD by 1. Note the use of the lambda function. We also then filter the RDD to only contain elements <10.
subRDD = xrangeRDD.map(lambda x: x-1) filteredRDD = subRDD.filter(lambda x : x<10)
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Task 3: Actions An action returns a result to the driver and triggers the evaluation of the RDD's pending transformations. We now apply the `collect()` action to get the output from the transformation.
print(filteredRDD.collect()) filteredRDD.count()
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
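Besides collect() and count(), a few other common actions are worth trying; the short examples below reuse filteredRDD from the cells above.

# take(n) returns the first n elements without pulling the whole RDD to the driver
print(filteredRDD.take(3))

# first() returns a single element; reduce() folds the RDD with a function
print(filteredRDD.first())
print(filteredRDD.reduce(lambda a, b: a + b))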
Task 4: Caching Data This simple example shows how to create an RDD and cache it. Notice the substantial speed improvement (roughly 5x in the timings below)! If you wish to see the actual computation time, browse to the Spark UI...it's at host:4040. You'll see that the second calculation took much less time!
import time test = sc.parallelize(range(1,50000),4) test.cache() t1 = time.time() # first count will trigger evaluation of count *and* cache count1 = test.count() dt1 = time.time() - t1 print("dt1: ", dt1) t2 = time.time() # second count operates on cached data only count2 = test.count() dt2 = time.time() - t2 print("dt2: ", dt2) #test.count()
dt1: 1.997375726699829 dt2: 0.41718220710754395
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
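Once the cached RDD is no longer needed, the memory can be released explicitly; a one-line follow-up to the cell above:

# Free the cached partitions when the comparison is done
test.unpersist()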
Exercise 3: DataFrames and SparkSQL In order to work with the extremely powerful SQL engine in Apache Spark, you will need a Spark Session. We have created that in the first Exercise, let us verify that spark session is still active.
spark
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Task 1: Create Your First DataFrame! You can create a structured data set (much like a database table) in Spark. Once you have done that, you can then use powerful SQL tools to query and join your dataframes.
# Download the data first into a local `people.json` file !curl https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-BD0225EN-SkillsNetwork/labs/data/people.json >> people.json # Read the dataset into a spark dataframe using the `read.json()` function df = spark.read.json("people.json").cache() # Print the dataframe as well as the data schema df.show() df.printSchema() # Register the DataFrame as a SQL temporary view df.createTempView("people")
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
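The task description also mentions joining dataframes. The lab itself does not perform a join, but a minimal sketch could look like the following; the cities dataframe and its names are invented here purely for illustration (a left join simply returns nulls for names that do not match).

cities = spark.createDataFrame([("Andy", "Toronto"), ("Justin", "Vancouver")],
                               ["name", "city"])
df.join(cities, on="name", how="left").show()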
Task 2: Explore the data using DataFrame functions and SparkSQLIn this section, we explore the datasets using functions both from dataframes as well as corresponding SQL queries using sparksql. Note the different ways to achieve the same task!
# Select and show basic data columns
df.select("name").show()
df.select(df["name"]).show()
spark.sql("SELECT name FROM people").show()

# Perform basic filtering
df.filter(df["age"] > 21).show()
spark.sql("SELECT age, name FROM people WHERE age > 21").show()

# Perform basic aggregation of data
df.groupBy("age").count().show()
spark.sql("SELECT age, COUNT(age) as count FROM people GROUP BY age").show()
+----+-----+ | age|count| +----+-----+ | 19| 1| |null| 1| | 30| 1| +----+-----+ +----+-----+ | age|count| +----+-----+ | 19| 1| |null| 0| | 30| 1| +----+-----+
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
*** Question 1 - RDDs Create an RDD with integers from 1-50. Apply a transformation to multiply every number by 2, resulting in an RDD that contains the first 50 even numbers.
# starter code
# numbers = range(1, 50)
# numbers_RDD = ...
# even_numbers_RDD = numbers_RDD.map(lambda x: ..)

# Code block for learners to answer
numbers = range(1, 51)   # integers 1 through 50
numbers_RDD = sc.parallelize(numbers, 4)
even_numbers_RDD = numbers_RDD.map(lambda x: x * 2)
even_numbers_RDD.collect()
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Question 2 - DataFrames and SparkSQL Similar to the `people.json` file, now read the `people2.json` file into the notebook, load it into a dataframe and apply SQL operations to determine the average age in our people2 file.
# starter code
# !curl https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-BD0225EN-SkillsNetwork/labs/data/people2.json >> people2.json
# df = spark.read...
# df.createTempView..
# spark.sql("SELECT ...")

# Code block for learners to answer
!curl https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-BD0225EN-SkillsNetwork/labs/data/people2.json >> people2.json
df = spark.read.json('people2.json')
df.createTempView("people2")
spark.sql("SELECT AVG(age) FROM people2").show()
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 326 100 326 0 0 478 0 --:--:-- --:--:-- --:--:-- 478
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
Hint: 1. The SQL query `SELECT AVG(column_name) FROM ...` can be used to find the average value of a column. 2. Another possible way is to use the dataframe operations select() and mean(). Solution: `df = spark.read.json('people2.json')`, `df.createTempView("people2")`, `spark.sql("SELECT AVG(age) from people2").show()`. Question 3 - SparkSession Close the SparkSession we created for this notebook
# Code block for learners to answer spark.stop()
_____no_output_____
MIT
Coursera/Apache_Spark_Fundamentals.ipynb
SokolovVadim/Big-Data
There are two ways to load a pretrained model. The first is to load it from a local model directory. A model directory consists of two files: config.pkl, which describes the configuration of the model, and model.pt, which holds the model weights. If you used model.save(MODEL_DIR) to save the model, the directory already has this layout. The code below shows that, assuming your model is stored at 'path', you can load it using the path_dir parameter.
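A minimal sketch of the save/load round trip described above, assuming a trained DeepPurpose model object named `model` and the usual `from DeepPurpose import models` import; the directory name is illustrative:

MODEL_DIR = './my_model_dir'            # illustrative path (assumption)
model.save(MODEL_DIR)                   # writes config.pkl and model.pt as described above
reloaded = models.model_pretrained(path_dir=MODEL_DIR)
reloaded.config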
# download a pretrained model and get its local directory path path = utils.download_pretrained_model('Morgan_AAC_DAVIS') # load the model from that directory via the path_dir parameter net = models.model_pretrained(path_dir = path) # inspect the model configuration net.config
_____no_output_____
BSD-3-Clause
DEMO/load_pretraining_models_tutorial.ipynb
markcheung/DeepPurpose
For models provided by us, you can directly use the pre-designated model names. The full list is in the Github README https://github.com/kexinhuang12345/DeepPurpose/blob/master/README.md#pretrained-models
net = models.model_pretrained(model = 'MPNN_CNN_DAVIS') net.config
Beginning Downloading MPNN_CNN_DAVIS Model... Downloading finished... Beginning to extract zip file... pretrained model Successfully Downloaded...
BSD-3-Clause
DEMO/load_pretraining_models_tutorial.ipynb
markcheung/DeepPurpose
SIR example Deterministic model
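For reference, the ODE system implemented in the `SIR` function below, reconstructed from that code (with birth rate $b$, death rate $d$, transmission rate $\beta$, recovery rate $u$, and vaccination rate $v$):

$$
\begin{aligned}
\frac{dS}{dt} &= bN - dS - \beta \frac{I}{N} S - vS,\\
\frac{dI}{dt} &= \beta \frac{I}{N} S - uI - dI,\\
\frac{dR}{dt} &= uI - dR + vS,
\end{aligned}
\qquad N = S + I + R .
$$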
def SIR(t, y, b, d, beta, u, v): N = y[0]+y[1]+y[2] return [b*N - d*y[0] - beta*y[1]/N*y[0] - v*y[0], beta*y[1]/N*y[0] - u*y[1] - d*y[1], u*y[1] - d*y[2] + v*y[0]] # Time interval for the simulation t0 = 0 t1 = 120 t_span = (t0, t1) t_eval = np.linspace(t_span[0], t_span[1], 10000) # Initial conditions, N = 1000 I = 5 R = 0 S = N - I y0 = [S, I , R] # Parameters b = 0.002/365 d = 0.0016/365 beta = 0.3 u = 1.0/7.0 v = 0.0 # Solve for SIR equation sol = solve_ivp(lambda t,y: SIR(t, y, b, d, beta, u, v), t_span, y0, method='RK45',t_eval=t_eval) # plot fig, ax = plt.subplots(1, figsize=(6, 4.5)) # plot y1 and y2 together ax.plot(sol.t.T,sol.y.T ) ax.set_ylabel('Number of predator and prey') ax.set_xlabel('time') ax.legend(['Susceptible', 'Infectious', 'Recover']) if saveFigure: filename = 'SIR_deterministic.pdf' fig.savefig(filename, format='pdf', dpi=1000, bbox_inches='tight')
_____no_output_____
MIT
LectureNotes/MC/MC.ipynb
enigne/ScientificComputingBridging
Stochastic model
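A short summary of the Gillespie step implemented in `gillespie_draw` below, together with the seven SIR reactions whose propensities are encoded in `simple_propensity` and whose state changes appear in `simple_update` in the next cells:

$$
\begin{aligned}
&\tau \sim \mathrm{Exp}\!\Big(\textstyle\sum_j a_j\Big), \qquad \Pr(\text{reaction } j) = a_j \Big/ \textstyle\sum_k a_k,\\
&S \xrightarrow{\;\beta I S / N\;} I \ \text{(infection)}, \quad I \xrightarrow{\;uI\;} R \ \text{(recovery)}, \quad S \xrightarrow{\;vS\;} R \ \text{(vaccination)},\\
&S \xrightarrow{\;dS\;} \emptyset, \quad I \xrightarrow{\;dI\;} \emptyset, \quad R \xrightarrow{\;dR\;} \emptyset \ \text{(deaths)}, \quad \emptyset \xrightarrow{\;bN\;} S \ \text{(birth)}.
\end{aligned}
$$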
import numpy as np # Plotting modules import matplotlib.pyplot as plt def sample_discrete(probs): """ Randomly sample an index with probability given by probs. """ # Generate random number q = np.random.rand() # Find index i = 0 p_sum = 0.0 while p_sum < q: p_sum += probs[i] i += 1 return i - 1 # Function to draw time interval and choice of reaction def gillespie_draw(params, propensity_func, population): """ Draws a reaction and the time it took to do that reaction. """ # Compute propensities props = propensity_func(params, population) # Sum of propensities props_sum = props.sum() # Compute time time = np.random.exponential(1.0 / props_sum) # Compute discrete probabilities of each reaction rxn_probs = props / props_sum # Draw reaction from this distribution rxn = sample_discrete(rxn_probs) return rxn, time def gillespie_ssa(params, propensity_func, update, population_0, time_points): """ Uses the Gillespie stochastic simulation algorithm to sample from proability distribution of particle counts over time. Parameters ---------- params : arbitrary The set of parameters to be passed to propensity_func. propensity_func : function Function of the form f(params, population) that takes the current population of particle counts and return an array of propensities for each reaction. update : ndarray, shape (num_reactions, num_chemical_species) Entry i, j gives the change in particle counts of species j for chemical reaction i. population_0 : array_like, shape (num_chemical_species) Array of initial populations of all chemical species. time_points : array_like, shape (num_time_points,) Array of points in time for which to sample the probability distribution. Returns ------- sample : ndarray, shape (num_time_points, num_chemical_species) Entry i, j is the count of chemical species j at time time_points[i]. """ # Initialize output pop_out = np.empty((len(time_points), update.shape[1]), dtype=np.int) # Initialize and perform simulation i_time = 1 i = 0 t = time_points[0] population = population_0.copy() pop_out[0,:] = population while i < len(time_points): while t < time_points[i_time]: # draw the event and time step event, dt = gillespie_draw(params, propensity_func, population) # Update the population population_previous = population.copy() population += update[event,:] # Increment time t += dt # Update the index i = np.searchsorted(time_points > t, True) # Update the population pop_out[i_time:min(i,len(time_points))] = population_previous # Increment index i_time = i return pop_out def simple_propensity(params, population): """ Returns an array of propensities given a set of parameters and an array of populations. """ # Unpack parameters beta, b, d, u, v = params # Unpack population S, I, R = population N = S + I + R return np.array([beta*I*S/N, u*I, v*S, d*S, d*I, d*R, b*N])
_____no_output_____
MIT
LectureNotes/MC/MC.ipynb
enigne/ScientificComputingBridging
Solve the SIR model with the stochastic method
# Column changes S, I, R simple_update = np.array([[-1, 1, 0], [0, -1, 1], [-1, 0, 1], [-1, 0, 0], [0, -1, 0], [0, 0, -1], [1, 0, 0]], dtype=np.int) # Specify parameters for calculation params = np.array([0.3, 0.002/365, 0.0016/365, 1/7.0, 0]) time_points = np.linspace(0, 120, 500) population_0 = np.array([995, 5, 0]) n_simulations = 100 # Seed random number generator for reproducibility np.random.seed(42) # Initialize output array pops = np.empty((n_simulations, len(time_points), 3)) # Run the calculations for i in range(n_simulations): pops[i,:,:] = gillespie_ssa(params, simple_propensity, simple_update, population_0, time_points) # Set up subplots fig, ax = plt.subplots(1, 1, figsize=(6, 4.5)) for j in range(3): ax.plot(time_points, pops[4,:,j], '-', color='C'+str(j)) ax.set_ylabel('Number of predator and prey') ax.set_xlabel('time') ax.legend(['Susceptible', 'Infectious', 'Recover']) if saveFigure: filename = 'SIR_stochastic1.pdf' fig.savefig(filename, format='pdf', dpi=1000, bbox_inches='tight') # Set up subplots fig, ax = plt.subplots(1, 1, figsize=(6, 4.5)) ax.plot(time_points, pops[:,:,0].mean(axis=0), lw=6) ax.plot(time_points, pops[:,:,1].mean(axis=0), lw=6) ax.plot(time_points, pops[:,:,2].mean(axis=0), lw=6) ax.set_ylabel('Number of predator and prey') ax.set_xlabel('time') ax.legend(['Susceptible', 'Infectious', 'Recover']) for j in range(3): for i in range(n_simulations): ax.plot(time_points, pops[i,:,j], '-', lw=0.3, alpha=0.2, color='C'+str(j)) if saveFigure: filename = 'SIR_stochasticAll.pdf' fig.savefig(filename, format='pdf', dpi=1000, bbox_inches='tight')
_____no_output_____
MIT
LectureNotes/MC/MC.ipynb
enigne/ScientificComputingBridging
Simulate interest rate paths with the CIR model
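For reference, the Cox–Ingersoll–Ross (CIR) short-rate model and its standard Euler–Maruyama discretization are shown below. Note that the implementation that follows draws the Gaussian increment without the $\sqrt{\Delta t}$ factor, so its per-step noise scaling differs from the textbook scheme:

$$
dr_t = K(\theta - r_t)\,dt + \sigma\sqrt{r_t}\,dW_t,
\qquad
r_{i+1} = r_i + K(\theta - r_i)\,\Delta t + \sigma\sqrt{|r_i|}\,\sqrt{\Delta t}\,Z_i,
\quad Z_i \sim \mathcal{N}(0,1).
$$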
import math import numpy as np import matplotlib.pyplot as plt def cir(r0, K, theta, sigma, T=1., N=10,seed=777): np.random.seed(seed) dt = T/float(N) rates = [r0] for i in range(N): dr = K*(theta-rates[-1])*dt + \ sigma*math.sqrt(abs(rates[-1]))*np.random.normal() rates.append(rates[-1] + dr) return range(N+1), rates fig, ax = plt.subplots(1, 1, figsize=(6, 4.5)) for i in range(30): x, y = cir(72, 0.001, 0.01, 0.012, 10., N=200, seed=100+i) ax.plot(x, y) ax.set_ylabel('Assets price') ax.set_xlabel('time') ax.autoscale(enable=True, axis='both', tight=True) if saveFigure: filename = 'CIR.pdf' fig.savefig(filename, format='pdf', dpi=1000, bbox_inches='tight')
_____no_output_____
MIT
LectureNotes/MC/MC.ipynb
enigne/ScientificComputingBridging
Import data and drop redundant data (rates)
# import data df = pd.read_csv('../../data/deepsolar_tract.csv', encoding = "utf-8")
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Clean Data
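The helpers `drop_redundant_columns` and `create_has_tiles_target_column` are defined elsewhere in the project and are not shown here. A minimal sketch of what the target construction presumably does, assuming the DeepSolar `tile_count` column; the column name and body are assumptions, not the project's actual code:

import pandas as pd

def create_has_tiles_target_column(df: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical sketch: flag census tracts with at least one detected solar tile,
    # then drop the raw count so it cannot leak the target into the features.
    df = df.copy()
    df['has_tiles'] = (df['tile_count'] > 0).astype(int)   # 'tile_count' is an assumed column name
    return df.drop(columns=['tile_count'], errors='ignore')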
df = drop_redundant_columns(df) # Create our target column 'has_tiles', and drop additional redundant columns df = create_has_tiles_target_column(df) df.shape # # # Figure out which variables are highly correlated, remove the most correlated ones one by one # corr = pd.DataFrame((df.corr() > 0.8).sum()) # corr.sort_values(by = 0, ascending = False)[0:5] # # # Add highly correlated variables to list 'to_drop' # to_drop = ['poverty_family_count','education_population','population', 'household_count','housing_unit_occupied_count', 'electricity_price_overall'] # Drop highly colinear variables # df = df.drop(to_drop, axis = 1) # VIF score
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Checking for missing values
nulls = pd.DataFrame(df.isna().sum()) nulls.columns = ["missing"] nulls[nulls['missing']>0].head() # drop all missing values df = df.dropna(axis = 0) # Check class imbalance df.has_tiles.value_counts() df.shape
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Train test split
X = df.drop('has_tiles', axis = 1) y = df['has_tiles'] df.shape X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, stratify = y)
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Sampling Techniques
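`pick_sampling_method` is a project helper that is not shown in this notebook. A minimal sketch of what it might look like using imbalanced-learn; the signature mirrors the call below, but the body is an assumption:

from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

def pick_sampling_method(X, y, method='oversampling'):
    # Hypothetical sketch: rebalance the training classes with the requested strategy
    samplers = {
        'smote': SMOTE(random_state=42),
        'oversampling': RandomOverSampler(random_state=42),
        'undersampling': RandomUnderSampler(random_state=42),
    }
    return samplers[method].fit_resample(X, y)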
# smote, undersampling, or oversampling X_train, y_train = pick_sampling_method(X_train, y_train, method = 'oversampling') y_train.value_counts()
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Scale Data
scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test) X_train.shape
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Modeling
from sklearn.metrics import classification_report
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Vanilla Decision Tree 0.74
## Baseline: a vanilla (untuned) decision tree used for reference dummy = DecisionTreeClassifier() dummy.fit(X_train, y_train) y_pred = dummy.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred))) print("Recall: {}".format(recall_score(y_test, y_pred))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred))) print("F1 Score: {}".format(f1_score(y_test, y_pred))) print(classification_report(y_test, y_pred))
precision recall f1-score support 0 0.24 0.82 0.37 2500 1 0.79 0.21 0.33 8320 accuracy 0.35 10820 macro avg 0.51 0.51 0.35 10820 weighted avg 0.66 0.35 0.34 10820
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Decision Tree with Hyperparameter Tuning
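`find_hyperparameters`, `pipe_dt`, and `params_dt` are defined earlier in the notebook and not shown here. A plausible sketch, assuming a scikit-learn Pipeline wrapped in GridSearchCV; the parameter grid values are illustrative:

from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

pipe_dt = Pipeline([('dt', DecisionTreeClassifier())])
params_dt = {'dt__max_depth': [2, 5, 10],
             'dt__min_samples_split': [2, 5, 10],
             'dt__min_samples_leaf': [1, 5, 20]}

def find_hyperparameters(pipeline, param_grid, X, y, cv=5, scoring='f1'):
    # Hypothetical sketch: cross-validated grid search over the pipeline's parameters
    grid = GridSearchCV(pipeline, param_grid, cv=cv, scoring=scoring, n_jobs=-1)
    grid.fit(X, y)
    return grid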
dt = find_hyperparameters(pipe_dt, params_dt, X_train, y_train) dt.best_params_ best_dt = dt.best_estimator_ # Decision Tree: {'dt__max_depth': 2, 'dt__min_samples_leaf': 1, 'dt__min_samples_split': 2} best_dt.fit(X_train, y_train) best_dt.score(X_test, y_test) # Decision Tree: 0.755637707948244 y_pred_dt = best_dt.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_dt))) print("Recall: {}".format(recall_score(y_test, y_pred_dt))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_dt))) print("F1 Score: {}".format(f1_score(y_test, y_pred_dt)))
Precision: 0.9131785238869989 Recall: 0.7420673076923077 Accuracy: 0.7474121996303142 F1 Score: 0.8187785955838471
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Final Model
# with oversampling rf = RandomForestClassifier(max_features = 'sqrt', max_depth = 5, min_samples_leaf = 5, n_estimators = 30) rf.fit(X_train, y_train) y_pred = rf.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred))) print("Recall: {}".format(recall_score(y_test, y_pred))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred))) print("F1 Score: {}".format(f1_score(y_test, y_pred))) print("Balanced Accuracy: {}".format(balanced_accuracy_score(y_test, y_pred))) cm = ConfusionMatrix(rf, fontsize = 'x-large', classes = ['No Solar', 'Solar']) cm.score(X_test, y_test) cm.show()
C:\Users\allis\Anaconda3\lib\site-packages\sklearn\base.py:197: FutureWarning: From version 0.24, get_params will raise an AttributeError if a parameter cannot be retrieved as an instance attribute. Previously it would return None. FutureWarning) C:\Users\allis\Anaconda3\lib\site-packages\yellowbrick\classifier\base.py:232: YellowbrickWarning: could not determine class_counts_ from previously fitted classifier YellowbrickWarning,
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Vanilla Random Forests
rf = RandomForestClassifier() rf.fit(X_train, y_train) y_pred = rf.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred))) print("Recall: {}".format(recall_score(y_test, y_pred))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred))) print("F1 Score: {}".format(f1_score(y_test, y_pred))) print("Balanced Accuracy: {}".format(balanced_accuracy_score(y_test, y_pred))) cm_rf = ConfusionMatrix(rf, classes = ['No Solar', 'Solar'], label_encoder={0: 'No Solar', 1: 'Solar'}) cm_rf.score(X_test, y_test) cm_rf.poof() # vanilla with smote # Precision: 0.8955014655282274 # Recall: 0.8445913461538461 # Accuracy: 0.804713493530499 # F1 Score: 0.8693016638832188 # Balanced Accuracy: 0.758295673076923 # vanilla with oversampling # Precision: 0.8665596698383584 # Recall: 0.9085336538461538 # Accuracy: 0.8220887245841035 # F1 Score: 0.8870504019245439 # Balanced Accuracy: 0.7214668269230768
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Random Forests with Hyperparameter Tuning
### Random Forests rf = find_hyperparameters(pipe_rf, params_rf, X_train, y_train) print(rf.best_params_) best_rf = rf.best_estimator_ #first hyperparamter tuning: {'rf__max_features': 'sqrt', 'rf__min_samples_leaf': 20, 'rf__n_estimators': 30} # Second tuning: {'rf__min_samples_leaf': 5, 'rf__n_estimators': 50} best_rf.fit(X_train, y_train) best_rf.score(X_test, y_test) # Random Forests: 0.793807763401109 y_pred_rf = best_rf.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_rf))) print("Recall: {}".format(recall_score(y_test, y_pred_rf))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_rf))) print("F1 Score: {}".format(f1_score(y_test, y_pred_rf))) print("Balanced Accuracy: {}".format(balanced_accuracy_score(y_test, y_pred_rf)))
Precision: 0.8924837003321442 Recall: 0.8719951923076923 Accuracy: 0.8207948243992607 F1 Score: 0.8821204936470303 Balanced Accuracy: 0.7611975961538462
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Vanilla SVC
svc = SVC() svc.fit(X_train, y_train) y_pred_svc = svc.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_svc))) print("Recall: {}".format(recall_score(y_test, y_pred_svc))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_svc))) print("F1 Score: {}".format(f1_score(y_test, y_pred_svc)))
Precision: 0.9094472225976483 Recall: 0.8087740384615385 Accuracy: 0.7910351201478744 F1 Score: 0.8561613334181563
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
SVC with Hyperparameter Tuning
svc = find_hyperparameters(pipe_svc, params_svc, X_train, y_train) print(svc.best_params_) best_svc = svc.best_estimator_ best_svc.fit(X_train, y_train) best_svc.score(X_test, y_test) y_pred_svc = best_svc.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_svc))) print("Recall: {}".format(recall_score(y_test, y_pred_svc))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_svc))) print("F1 Score: {}".format(f1_score(y_test, y_pred_svc)))
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Vanilla KNN
knn = KNeighborsClassifier() knn.fit(X_train, y_train) y_pred_knn = knn.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_knn))) print("Recall: {}".format(recall_score(y_test, y_pred_knn))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_knn))) print("F1 Score: {}".format(f1_score(y_test, y_pred_knn)))
Precision: 0.9354388413889374 Recall: 0.6443509615384615 Accuracy: 0.6923290203327171 F1 Score: 0.7630773610419187
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
KNN with Hyperparameter Tuning
knn = find_hyperparameters(pipe_knn, params_knn, X_train, y_train) print(knn.best_params_) best_knn = knn.best_estimator_ best_knn.fit(X_train, y_train) best_knn.score(X_test, y_test) y_pred_knn = best_knn.predict(X_test) print("Precision: {}".format(precision_score(y_test, y_pred_knn))) print("Recall: {}".format(recall_score(y_test, y_pred_knn))) print("Accuracy: {}".format(accuracy_score(y_test, y_pred_knn))) print("F1 Score: {}".format(f1_score(y_test, y_pred_knn)))
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Preliminary Conclusions: Model Performance Comparisons Comparing both accuracy and balanced accuracy scores across models, the Random Forest classifier trained with oversampling and tuned hyperparameters performed best.
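The next cell relies on `isFalsePositive`, another project helper not shown here, which appears to return the scaled test rows that the model predicts as having solar while the label says otherwise. A plausible sketch; the name and signature mirror the call below, but the body is an assumption:

import pandas as pd

def isFalsePositive(df, X_test, y_test, model):
    # Hypothetical sketch: keep test rows predicted 1 ('has solar') whose true label is 0
    y_pred = model.predict(X_test)
    features = pd.DataFrame(X_test, columns=df.drop('has_tiles', axis=1).columns)
    mask = (y_pred == 1) & (y_test.to_numpy() == 0)
    return features[mask]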
falsepositives = isFalsePositive(df, X_test, y_test, rf) inversefalsepositives = scaler.inverse_transform(falsepositives) inversefalsepositives = pd.DataFrame(inversefalsepositives) inversefalsepositives = inversefalsepositives.set_axis(falsepositives.columns, axis=1, inplace=False) len(inversefalsepositives) ozdf = pd.read_csv("../data/ListOfOppurtunityZonesWithoutAKorHI.csv", encoding = "utf-8") ozdf = ozdf.rename(columns={"Census Tract Number": "Census_Tract_Number", "Tract Type": "Tract_Type", "ACS Data Source": "ACS_Data_Source"}) # define `results` before writing it to CSV results = pd.merge(inversefalsepositives, ozdf, left_on = inversefalsepositives.fips, right_on = ozdf.Census_Tract_Number) results.to_csv('../data/results.csv')
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis
Running Model on Entire Dataset
ozdf = ozdf['Census_Tract_Number'] merged = df.merge(ozdf, how = 'left', left_on='fips',right_on='Census_Tract_Number') merged = merged.dropna() merged.drop('fips', axis = 1, inplace = True) merged['has_tiles'].value_counts() X_ozdf = merged.drop('has_tiles', axis = 1) y_ozdf = merged['has_tiles'] y_pred_ozdf = rf.predict(X_ozdf) y_pred_ozdf y_pred_ozdf = pd.Series(y_pred_ozdf) y_pred_ozdf.value_counts() final = merged.merge(y_pred_ozdf.rename('y_pred'), how = 'left', on = merged.index) final = final[final['y_pred'] == 1]
_____no_output_____
MIT
EDA_Notebooks/EDA_Allison.ipynb
BudBernhard/Mod4Project-DeepSolarAnalysis