markdown | code | output | license | path | repo_name
---|---|---|---|---|---
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNN. This part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one-step function you implemented for the LSTM at each iteration. You will then update the parameters by summing them individually. Finally, return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step, call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored. | def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m))
dc_prevt = np.zeros((n_a, m))
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros((n_a, n_a + n_x))
dWc = np.zeros((n_a, n_a + n_x))
dWo = np.zeros((n_a, n_a + n_x))
dbf = np.zeros((n_a, 1))
dbi = np.zeros((n_a, 1))
dbc = np.zeros((n_a, 1))
dbo = np.zeros((n_a, 1))
# loop back over the whole sequence
for t in reversed(range(T_x)):
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:,:,t] + da_prevt, dc_prevt, caches[t])
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = gradients["dxt"]
dWf += gradients["dWf"]
dWi += gradients["dWi"]
dWc += gradients["dWc"]
dWo += gradients["dWo"]
dbf += gradients["dbf"]
dbi += gradients["dbi"]
dbc += gradients["dbc"]
dbo += gradients["dbo"]
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = gradients["da_prev"]
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape) | gradients["dx"][1][2] = [-0.00173313 0.08287442 -0.30545663 -0.43281115]
gradients["dx"].shape = (3, 10, 4)
gradients["da0"][2][3] = -0.095911501954
gradients["da0"].shape = (5, 10)
gradients["dWf"][3][1] = -0.0698198561274
gradients["dWf"].shape = (5, 8)
gradients["dWi"][1][2] = 0.102371820249
gradients["dWi"].shape = (5, 8)
gradients["dWc"][3][1] = -0.0624983794927
gradients["dWc"].shape = (5, 8)
gradients["dWo"][1][2] = 0.0484389131444
gradients["dWo"].shape = (5, 8)
gradients["dbf"][4] = [-0.0565788]
gradients["dbf"].shape = (5, 1)
gradients["dbi"][4] = [-0.15399065]
gradients["dbi"].shape = (5, 1)
gradients["dbc"][4] = [-0.29691142]
gradients["dbc"].shape = (5, 1)
gradients["dbo"][4] = [-0.29798344]
gradients["dbo"].shape = (5, 1)
| MIT | Course-5-Sequence-Models/week1/Building+a+Recurrent+Neural+Network+-+Step+by+Step+-+v3.ipynb | xnone/coursera-deep-learning |
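One way to sanity-check the accumulated gradients (not part of the original notebook, just a sketch): under the surrogate loss L = sum over t of a_t * da_t, the upstream gradient `da` is exactly dL/da, so any entry of the analytic gradients should match a centred finite difference taken through the `lstm_forward` implemented earlier in the notebook. The entry (3, 1) of Wf and the epsilon below are arbitrary choices.

eps = 1e-7
i, j = 3, 1                                               # arbitrary entry of Wf to check
params_plus = {k: v.copy() for k, v in parameters.items()}
params_minus = {k: v.copy() for k, v in parameters.items()}
params_plus["Wf"][i, j] += eps
params_minus["Wf"][i, j] -= eps
a_plus, _, _, _ = lstm_forward(x, a0, params_plus)
a_minus, _, _, _ = lstm_forward(x, a0, params_minus)
numeric = np.sum((a_plus - a_minus) * da) / (2 * eps)     # finite-difference estimate of dL/dWf[i, j]
print(numeric, gradients["dWf"][i, j])                    # should be close if lstm_backward is correct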
Azure ML & Azure Databricks notebooks by Parashar Shah.Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Model Building | import os
import pprint
import numpy as np
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
##TESTONLY
# import auth creds from notebook parameters
tenant = dbutils.widgets.get('tenant_id')
username = dbutils.widgets.get('service_principal_id')
password = dbutils.widgets.get('service_principal_password')
auth = azureml.core.authentication.ServicePrincipalAuthentication(tenant, username, password)
# import the Workspace class and check the azureml SDK version
from azureml.core import Workspace
ws = Workspace.from_config(auth = auth)
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
##PUBLISHONLY
## import the Workspace class and check the azureml SDK version
#from azureml.core import Workspace
#
#ws = Workspace.from_config()
#print('Workspace name: ' + ws.name,
# 'Azure region: ' + ws.location,
# 'Subscription id: ' + ws.subscription_id,
# 'Resource group: ' + ws.resource_group, sep = '\n')
#get the train and test datasets
train_data_path = "AdultCensusIncomeTrain"
test_data_path = "AdultCensusIncomeTest"
train = spark.read.parquet(train_data_path)
test = spark.read.parquet(test_data_path)
print("train: ({}, {})".format(train.count(), len(train.columns)))
print("test: ({}, {})".format(test.count(), len(test.columns)))
train.printSchema() | _____no_output_____ | MIT | how-to-use-azureml/azure-databricks/03.Build_model_runHistory.ipynb | keerthiadu/MachineLearningNotebooks |
Define Model | label = "income"
dtypes = dict(train.dtypes)
dtypes.pop(label)
si_xvars = []
ohe_xvars = []
featureCols = []
for idx,key in enumerate(dtypes):
if dtypes[key] == "string":
featureCol = "-".join([key, "encoded"])
featureCols.append(featureCol)
tmpCol = "-".join([key, "tmp"])
# string-index and one-hot encode the string column
#https://spark.apache.org/docs/2.3.0/api/java/org/apache/spark/ml/feature/StringIndexer.html
#handleInvalid: Param for how to handle invalid data (unseen labels or NULL values).
#Options are 'skip' (filter out rows with invalid data), 'error' (throw an error),
#or 'keep' (put invalid data in a special additional bucket, at index numLabels). Default: "error"
si_xvars.append(StringIndexer(inputCol=key, outputCol=tmpCol, handleInvalid="skip"))
ohe_xvars.append(OneHotEncoder(inputCol=tmpCol, outputCol=featureCol))
else:
featureCols.append(key)
# string-index the label column into a column named "label"
si_label = StringIndexer(inputCol=label, outputCol='label')
# assemble the encoded feature columns in to a column named "features"
assembler = VectorAssembler(inputCols=featureCols, outputCol="features")
from azureml.core.run import Run
from azureml.core.experiment import Experiment
import numpy as np
import os
import shutil
model_name = "AdultCensus_runHistory.mml"
model_dbfs = os.path.join("/dbfs", model_name)
run_history_name = 'spark-ml-notebook'
# start a training run by defining an experiment
myexperiment = Experiment(ws, "Ignite_AI_Talk")
root_run = myexperiment.start_logging()
# Regularization Rates -
regs = [0.0001, 0.001, 0.01, 0.1]
# try a bunch of regularization rate in a Logistic Regression model
for reg in regs:
print("Regularization rate: {}".format(reg))
# create a bunch of child runs
with root_run.child_run("reg-" + str(reg)) as run:
# create a new Logistic Regression model.
lr = LogisticRegression(regParam=reg)
# put together the pipeline
pipe = Pipeline(stages=[*si_xvars, *ohe_xvars, si_label, assembler, lr])
# train the model
model_p = pipe.fit(train)
# make prediction
pred = model_p.transform(test)
# evaluate. note only 2 metrics are supported out of the box by Spark ML.
bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')
au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)
au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)
print("Area under ROC: {}".format(au_roc))
print("Area Under PR: {}".format(au_prc))
# log reg, au_roc, au_prc and feature names in run history
run.log("reg", reg)
run.log("au_roc", au_roc)
run.log("au_prc", au_prc)
run.log_list("columns", train.columns)
# save model
model_p.write().overwrite().save(model_name)
# upload the serialized model into run history record
mdl, ext = model_name.split(".")
model_zip = mdl + ".zip"
shutil.make_archive(mdl, 'zip', model_dbfs)
run.upload_file("outputs/" + model_name, model_zip)
#run.upload_file("outputs/" + model_name, path_or_stream = model_dbfs) #cannot deal with folders
# now delete the serialized model from local folder since it is already uploaded to run history
shutil.rmtree(model_dbfs)
os.remove(model_zip)
# Declare run completed
root_run.complete()
root_run_id = root_run.id
print ("run id:", root_run.id)
metrics = root_run.get_metrics(recursive=True)
best_run_id = max(metrics, key = lambda k: metrics[k]['au_roc'])
print(best_run_id, metrics[best_run_id]['au_roc'], metrics[best_run_id]['reg'])
#Get the best run
child_runs = {}
for r in root_run.get_children():
child_runs[r.id] = r
best_run = child_runs[best_run_id]
#Download the model from the best run to a local folder
best_model_file_name = "best_model.zip"
best_run.download_file(name = 'outputs/' + model_name, output_file_path = best_model_file_name) | _____no_output_____ | MIT | how-to-use-azureml/azure-databricks/03.Build_model_runHistory.ipynb | keerthiadu/MachineLearningNotebooks |
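Since the comments above describe how the string columns are indexed, one-hot encoded and assembled into a single `features` vector, it can help to look inside a fitted pipeline. A small sketch, not in the original notebook, using the `model_p` left over from the last child run: the final stage of the `PipelineModel` is the fitted `LogisticRegressionModel`, so its coefficient vector lines up with the assembled feature vector.

lr_stage = model_p.stages[-1]                       # fitted LogisticRegressionModel (last pipeline stage)
print("intercept:", lr_stage.intercept)
print("number of coefficients:", len(lr_stage.coefficients))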
Model Evaluation | ##unzip the model to dbfs (as load() seems to require that) and load it.
if os.path.isfile(model_dbfs) or os.path.isdir(model_dbfs):
shutil.rmtree(model_dbfs)
shutil.unpack_archive(best_model_file_name, model_dbfs)
model_p_best = PipelineModel.load(model_name)
# make prediction
pred = model_p_best.transform(test)
output = pred[['hours_per_week','age','workclass','marital_status','income','prediction']]
display(output.limit(5))
# evaluate. note only 2 metrics are supported out of the box by Spark ML.
bce = BinaryClassificationEvaluator(rawPredictionCol='rawPrediction')
au_roc = bce.setMetricName('areaUnderROC').evaluate(pred)
au_prc = bce.setMetricName('areaUnderPR').evaluate(pred)
print("Area under ROC: {}".format(au_roc))
print("Area Under PR: {}".format(au_prc)) | _____no_output_____ | MIT | how-to-use-azureml/azure-databricks/03.Build_model_runHistory.ipynb | keerthiadu/MachineLearningNotebooks |
Model Persistence | ##NOTE: by default the model is saved to and loaded from /dbfs/ instead of cwd!
model_p_best.write().overwrite().save(model_name)
print("saved model to {}".format(model_dbfs))
%sh
ls -la /dbfs/AdultCensus_runHistory.mml/*
dbutils.notebook.exit("success") | _____no_output_____ | MIT | how-to-use-azureml/azure-databricks/03.Build_model_runHistory.ipynb | keerthiadu/MachineLearningNotebooks |
**Installing the transformers library** | !cat /proc/meminfo
!df -h
!pip install transformers | Collecting transformers
Downloading https://files.pythonhosted.org/packages/d8/f4/9f93f06dd2c57c7cd7aa515ffbf9fcfd8a084b92285732289f4a5696dd91/transformers-3.2.0-py3-none-any.whl (1.0MB)
|████████████████████████████████| 1.0MB 9.0MB/s
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)
Collecting sacremoses
Downloading https://files.pythonhosted.org/packages/7d/34/09d19aff26edcc8eb2a01bed8e98f13a1537005d31e95233fd48216eed10/sacremoses-0.0.43.tar.gz (883kB)
|████████████████████████████████| 890kB 35.5MB/s
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)
Collecting tokenizers==0.8.1.rc2
Downloading https://files.pythonhosted.org/packages/80/83/8b9fccb9e48eeb575ee19179e2bdde0ee9a1904f97de5f02d19016b8804f/tokenizers-0.8.1rc2-cp36-cp36m-manylinux1_x86_64.whl (3.0MB)
|████████████████████████████████| 3.0MB 46.8MB/s
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers) (0.7)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)
Collecting sentencepiece!=0.1.92
Downloading https://files.pythonhosted.org/packages/d4/a4/d0a884c4300004a78cca907a6ff9a5e9fe4f090f5d95ab341c53d28cbc58/sentencepiece-0.1.91-cp36-cp36m-manylinux1_x86_64.whl (1.1MB)
|████████████████████████████████| 1.1MB 45.9MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.16.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)
Building wheels for collected packages: sacremoses
Building wheel for sacremoses (setup.py) ... done
Created wheel for sacremoses: filename=sacremoses-0.0.43-cp36-none-any.whl size=893257 sha256=d602654d8ce06566bfa73540ba28fc0736d3471c05b2dc28adc49e037e943519
Stored in directory: /root/.cache/pip/wheels/29/3c/fd/7ce5c3f0666dab31a50123635e6fb5e19ceb42ce38d4e58f45
Successfully built sacremoses
Installing collected packages: sacremoses, tokenizers, sentencepiece, transformers
Successfully installed sacremoses-0.0.43 sentencepiece-0.1.91 tokenizers-0.8.1rc2 transformers-3.2.0
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Importing the tools** | import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import torch
import transformers as ppb
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
import warnings
import re
warnings.filterwarnings('ignore') | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Importing the dataset from Drive** | from google.colab import drive
drive.mount('/content/gdrive')
df=pd.read_csv('gdrive/My Drive/Total_cleaned.csv',delimiter=';')
df1=pd.read_csv('gdrive/My Drive/Final_DUP.csv',delimiter=';')
df2=pd.read_csv('gdrive/My Drive/Final_NDUP.csv',delimiter=';')
df[3] | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Loading the Pre-trained BERT model** | model_class, tokenizer_class, pretrained_weights = (ppb.BertModel, ppb.BertTokenizer, 'bert-base-uncased')
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
model_class, tokenizer_class, pretrained_weights = (ppb.DistilBertModel, ppb.DistilBertTokenizer, 'distilbert-base-uncased')
#tokenizer = ppb.DistilBertTokenizer.from_pretrained(distil_bert, do_lower_case=True, add_special_tokens=True, max_length=128, pad_to_max_length=True)
tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
model = model_class.from_pretrained(pretrained_weights)
#model = TFDistilBertModel.from_pretrained('distilbert-base-uncased') | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
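As a quick illustration of what `tokenizer.encode` returns for the kind of title pair built further below (the two sentences here are made up; the exact ids depend on the vocabulary):

sample = "app crashes on startup" + " [SEP] " + "application crash when starting"
ids = tokenizer.encode(sample, add_special_tokens=True, truncation=True, max_length=512)
print(ids)                                   # begins with 101 ([CLS]) and ends with 102 ([SEP])
print(tokenizer.convert_ids_to_tokens(ids))  # the literal "[SEP]" in the text is mapped to the special token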
**Lower case** | df[0]= df[0].str.lower()
df[1]= df[1].str.lower()
df[2]= df[2].str.lower()
df[3]= df[3].str.lower()
df[4]= df[4].str.lower()
df[5]= df[5].str.lower() | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Remove Digits** | df[3] = df[3].str.replace(r'0', '')
df[3] = df[3].str.replace(r'1', '')
df[3] = df[3].str.replace(r'2', '')
df[3] = df[3].str.replace(r'3', '')
df[3] = df[3].str.replace(r'4', '')
df[3] = df[3].str.replace(r'5', '')
df[3] = df[3].str.replace(r'6', '')
df[3] = df[3].str.replace(r'7', '')
df[3] = df[3].str.replace(r'8', '')
df[3] = df[3].str.replace(r'9', '') | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Remove special characters** | df[3] = df[3].str.replace(r'/', '')
df[3] = df[3].str.replace(r'@ ?', '')
df[3] = df[3].str.replace(r'!', '')
df[3] = df[3].str.replace(r'+', '')
df[3] = df[3].str.replace(r'-', '')
df[3] = df[3].str.replace(r'/', '')
df[3] = df[3].str.replace(r':', '')
df[3] = df[3].str.replace(r';', '')
df[3] = df[3].str.replace(r'>', '')
df[3] = df[3].str.replace(r'=', '')
df[3] = df[3].str.replace(r'<', '')
df[3] = df[3].str.replace(r'(', '')
df[3] = df[3].str.replace(r')', '')
df[3] = df[3].str.replace(r'#', '')
df[3] = df[3].str.replace(r'$', '')
df[3] = df[3].str.replace(r'&', '')
df[3] = df[3].str.replace(r'*', '')
df[3] = df[3].str.replace(r'%', '')
df[3] = df[3].str.replace(r'_', '') | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
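The digit and special-character removal above can also be written as a single regular-expression pass. A sketch of an equivalent call (the character class simply lists the symbols stripped above, and the `regex=True` keyword is assumed to be available in the installed pandas):

df[3] = df[3].str.replace(r"[0-9/@!?+\-:;><=()#$&*%_]", "", regex=True)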
**Convert to String type** | df[3] = pd.Series(df[3], dtype="string") # avoids the tokenizer error: "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers"
df[2] = pd.Series(df[2], dtype="string")
df[2] = df[2].astype("|S")
df[2].str.decode("utf-8")
df[3] = df[3].astype("|S")
df[3].str.decode("utf-8")
df[3].str.len()
| _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Tokenization** | df.shape
batch_31=df1[:3000]
batch_32=df2[:3000]
df3 = pd.concat([batch_31,batch_32], ignore_index=True)
batch_41=df1[3000:6000]
batch_42=df2[3000:6000]
df4 = pd.concat([batch_41,batch_42], ignore_index=True)
batch_51=df1[6000:9000]
batch_52=df2[6000:9000]
df5 = pd.concat([batch_51,batch_52], ignore_index=True)
batch_61=df1[9000:12000]
batch_62=df2[9000:12000]
df6 = pd.concat([batch_61,batch_62], ignore_index=True)
batch_71=df1[12000:15000]
batch_72=df2[12000:15000]
df7 = pd.concat([batch_71,batch_72], ignore_index=True)
batch_81=df1[15000:18000]
batch_82=df2[15000:18000]
df8 = pd.concat([batch_81,batch_82], ignore_index=True)
batch_91=df1[18000:21000]
batch_92=df2[18000:21000]
df9 = pd.concat([batch_91,batch_92], ignore_index=True)
batch_101=df1[21000:]
batch_102=df2[21000:]
df10 = pd.concat([batch_101,batch_102], ignore_index=True)
batch_31=df1[:4000]
batch_32=df2[:4000]
df3 = pd.concat([batch_31,batch_32], ignore_index=True)
batch_41=df1[4000:8000]
batch_42=df2[4000:8000]
df4 = pd.concat([batch_41,batch_42], ignore_index=True)
batch_51=df1[8000:12000]
batch_52=df2[8000:12000]
df5 = pd.concat([batch_51,batch_52], ignore_index=True)
batch_61=df1[12000:16000]
batch_62=df2[12000:16000]
df6 = pd.concat([batch_61,batch_62], ignore_index=True)
batch_71=df1[16000:20000]
batch_72=df2[16000:20000]
df7 = pd.concat([batch_71,batch_72], ignore_index=True)
batch_81=df1[20000:24000]
batch_82=df2[20000:24000]
df8 = pd.concat([batch_81,batch_82], ignore_index=True)
def _get_segments3(tokens, max_seq_length):
"""Segments: 0 for the first sequence, 1 for the second"""
if len(tokens)>max_seq_length:
raise IndexError("Token length more than max seq length!")
segments = []
first_sep = False
current_segment_id = 0
for token in tokens:
segments.append(current_segment_id)
#print(token)
if token == 102:
#if first_sep:
#first_sep = False
#else:
current_segment_id = 1
return segments + [0] * (max_seq_length - len(tokens)) | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
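A toy call makes `_get_segments3` concrete (illustrative token ids only): every token up to and including the first `[SEP]` (id 102) is labelled 0, everything after it is labelled 1, and the result is zero-padded to `max_seq_length`.

print(_get_segments3([101, 2054, 2003, 102, 2023, 102], 10))
# -> [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]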
**df3** | pair3= df3['Title1'] + [" [SEP] "] + df3['Title2']
tokenized3 = pair3.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len3 = 0 # padding all lists to the same size
for i in tokenized3.values:
if len(i) > max_len3:
max_len3 = len(i)
max_len3 =120
padded3 = np.array([i + [0]*(max_len3-len(i)) for i in tokenized3.values])
np.array(padded3).shape
attention_mask3 = np.where(padded3 != 0, 1, 0)
attention_mask3.shape
input_ids3 = torch.tensor(padded3)
attention_mask3 = torch.tensor(attention_mask3)
input_segments3= np.array([_get_segments3(token, max_len3)for token in tokenized3.values])
token_type_ids3 = torch.tensor(input_segments3)
input_segments3 = torch.tensor(input_segments3)
with torch.no_grad():
last_hidden_states3 = model(input_ids3, attention_mask=attention_mask3, token_type_ids=input_segments3) # <<< 600 rows only !!!
with torch.no_grad():
last_hidden_states3 = model(input_ids3, attention_mask=attention_mask3)
features3 = last_hidden_states3[0][:,0,:].numpy()
features3 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df4** | pair4= df4['Title1'] + [" [SEP] "] + df4['Title2']
tokenized4 = pair4.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len4 = 0 # padding all lists to the same size
for i in tokenized4.values:
if len(i) > max_len4:
max_len4 = len(i)
max_len4 =120
padded4 = np.array([i + [0]*(max_len4-len(i)) for i in tokenized4.values])
np.array(padded4).shape
attention_mask4 = np.where(padded4 != 0, 1, 0)
attention_mask4.shape
input_ids4 = torch.tensor(padded4)
attention_mask4 = torch.tensor(attention_mask4)
def _get_segments3(tokens, max_seq_length):
"""Segments: 0 for the first sequence, 1 for the second"""
if len(tokens)>max_seq_length:
raise IndexError("Token length more than max seq length!")
segments = []
first_sep = False
current_segment_id = 0
for token in tokens:
segments.append(current_segment_id)
#print(token)
if token == 102:
#if first_sep:
#first_sep = False
#else:
current_segment_id = 1
return segments + [0] * (max_seq_length - len(tokens))
input_segments4= np.array([_get_segments3(token, max_len4)for token in tokenized4.values])
token_type_ids4 = torch.tensor(input_segments4)
input_segments4 = torch.tensor(input_segments4)
with torch.no_grad():
last_hidden_states4 = model(input_ids4, attention_mask=attention_mask4, token_type_ids=input_segments4) # <<< 600 rows only !!!
features4 = last_hidden_states4[0][:,0,:].numpy()
features4 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df5** | pair5= df5['Title1'] + [" [SEP] "] + df5['Title2']
tokenized5 = pair5.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
pair5.shape
tokenized5.shape | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Padding** | max_len5 = 0 # padding all lists to the same size
for i in tokenized5.values:
if len(i) > max_len5:
max_len5 = len(i)
max_len5 =120
padded5 = np.array([i + [0]*(max_len5-len(i)) for i in tokenized5.values])
np.array(padded5).shape # Dimensions of the padded variable | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Masking** | attention_mask5 = np.where(padded5 != 0, 1, 0)
attention_mask5.shape
input_ids5 = torch.tensor(padded5)
attention_mask5 = torch.tensor(attention_mask5)
input_ids[0] ######## TITLE 2 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Running the `model()` function through BERT** | def _get_segments3(tokens, max_seq_length):
"""Segments: 0 for the first sequence, 1 for the second"""
if len(tokens)>max_seq_length:
raise IndexError("Token length more than max seq length!")
segments = []
first_sep = False
current_segment_id = 0
for token in tokens:
segments.append(current_segment_id)
#print(token)
if token == 102:
#if first_sep:
#first_sep = False
#else:
current_segment_id = 1
return segments + [0] * (max_seq_length - len(tokens))
input_segments5= np.array([_get_segments3(token, max_len5)for token in tokenized5.values])
token_type_ids5 = torch.tensor(input_segments5)
input_segments5 = torch.tensor(input_segments5)
with torch.no_grad():
last_hidden_states5 = model(input_ids5, attention_mask=attention_mask5, token_type_ids=input_segments5) # <<< 600 rows only !!! | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Slicing the part of the output of BERT : [cls]** | features5 = last_hidden_states5[0][:,0,:].numpy()
features5 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
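To make the slicing explicit (a small illustrative check, not in the original notebook): `last_hidden_states5[0]` holds the token-level output with shape (batch, max_len5, hidden_size), and keeping index 0 along the token axis leaves one 768-dimensional `[CLS]` vector per title pair, which is what gets used as the pair embedding.

print(last_hidden_states5[0].shape)   # e.g. torch.Size([batch, 120, 768])
print(features5.shape)                # (batch, 768)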
**df6** | pair6= df6['Title1'] + [" [SEP] "] + df6['Title2']
tokenized6 = pair6.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len6 = 0 # padding all lists to the same size
for i in tokenized6.values:
if len(i) > max_len6:
max_len6 = len(i)
max_len6=120
padded6 = np.array([i + [0]*(max_len6-len(i)) for i in tokenized6.values])
np.array(padded6).shape # Dimensions of the padded variable
attention_mask6 = np.where(padded6 != 0, 1, 0)
attention_mask6.shape
input_ids6 = torch.tensor(padded6)
attention_mask6 = torch.tensor(attention_mask6)
input_segments6= np.array([_get_segments3(token, max_len6)for token in tokenized6.values])
token_type_ids6 = torch.tensor(input_segments6)
input_segments6 = torch.tensor(input_segments6)
with torch.no_grad():
last_hidden_states6 = model(input_ids6, attention_mask=attention_mask6, token_type_ids=input_segments6) # <<< 600 rows only !!!
features6 = last_hidden_states6[0][:,0,:].numpy()
features6 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df7** | pair7= df7['Title1'] + [" [SEP] "] + df7['Title2']
tokenized7 = pair7.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len7 = 0 # padding all lists to the same size
for i in tokenized7.values:
if len(i) > max_len7:
max_len7 = len(i)
max_len7=120
padded7 = np.array([i + [0]*(max_len7-len(i)) for i in tokenized7.values])
np.array(padded7).shape # Dimensions of the padded variable
attention_mask7 = np.where(padded7 != 0, 1, 0)
attention_mask7.shape
input_ids7 = torch.tensor(padded7)
attention_mask7 = torch.tensor(attention_mask7)
input_segments7= np.array([_get_segments3(token, max_len7)for token in tokenized7.values])
token_type_ids7 = torch.tensor(input_segments7)
input_segments7 = torch.tensor(input_segments7)
with torch.no_grad():
last_hidden_states7 = model(input_ids7, attention_mask=attention_mask7, token_type_ids=input_segments7) # <<< 600 rows only !!!
features7 = last_hidden_states7[0][:,0,:].numpy()
features7 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df8** | pair8= df8['Title1'] + [" [SEP] "] + df8['Title2']
tokenized8 = pair8.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len8 = 0 # padding all lists to the same size
for i in tokenized8.values:
if len(i) > max_len8:
max_len8 = len(i)
max_len8=120
padded8 = np.array([i + [0]*(max_len8-len(i)) for i in tokenized8.values])
np.array(padded8).shape # Dimensions of the padded variable
attention_mask8 = np.where(padded8 != 0, 1, 0)
attention_mask8.shape
input_ids8 = torch.tensor(padded8)
attention_mask8 = torch.tensor(attention_mask8)
input_segments8= np.array([_get_segments3(token, max_len8)for token in tokenized8.values])
token_type_ids8 = torch.tensor(input_segments8)
input_segments8 = torch.tensor(input_segments8)
with torch.no_grad():
last_hidden_states8 = model(input_ids8, attention_mask=attention_mask8, token_type_ids=input_segments8) # <<< 600 rows only !!!
features8 = last_hidden_states8[0][:,0,:].numpy()
features8 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df9** | pair9= df9['Title1'] + [" [SEP] "] + df9['Title2']
tokenized9 = pair9.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len9 = 0 # padding all lists to the same size
for i in tokenized9.values:
if len(i) > max_len9:
max_len9 = len(i)
max_len9=120
padded9 = np.array([i + [0]*(max_len9-len(i)) for i in tokenized9.values])
np.array(padded9).shape # Dimensions of the padded variable
attention_mask9 = np.where(padded9 != 0, 1, 0)
attention_mask9.shape
input_ids9 = torch.tensor(padded9)
attention_mask9 = torch.tensor(attention_mask9)
input_segments9= np.array([_get_segments3(token, max_len9)for token in tokenized9.values])
token_type_ids9 = torch.tensor(input_segments9)
input_segments9 = torch.tensor(input_segments9)
with torch.no_grad():
last_hidden_states9 = model(input_ids9, attention_mask=attention_mask9, token_type_ids=input_segments9)
features9 = last_hidden_states9[0][:,0,:].numpy()
features9 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**df10** | pair10= df10['Title1'] + [" [SEP] "] + df10['Title2']
tokenized10 = pair10.apply((lambda x: tokenizer.encode(x, add_special_tokens=True,truncation=True, max_length=512)))
max_len10 = 0 # padding all lists to the same size
for i in tokenized10.values:
if len(i) > max_len10:
max_len10 = len(i)
max_len10=120
padded10 = np.array([i + [0]*(max_len10-len(i)) for i in tokenized10.values])
np.array(padded10).shape # Dimensions of the padded variable
attention_mask10 = np.where(padded10 != 0, 1, 0)
attention_mask10.shape
input_ids10 = torch.tensor(padded10)
attention_mask10 = torch.tensor(attention_mask10)
input_segments10= np.array([_get_segments3(token, max_len10)for token in tokenized10.values])
token_type_ids10 = torch.tensor(input_segments10)
input_segments10 = torch.tensor(input_segments10)
with torch.no_grad():
last_hidden_states10 = model(input_ids10, attention_mask=attention_mask10, token_type_ids=input_segments10) # <<< 600 rows only !!!
features10 = last_hidden_states10[0][:,0,:].numpy()
features10 | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Classification** | features=np.concatenate([features3,features4,features5,features6,features7])
features=features3
features.shape
Total = pd.concat([df3,df4,df5,df6,df7], ignore_index=True)
Total
labels =df3['Label']
labels =Total['Label']
labels
train_features, test_features, train_labels, test_labels = train_test_split(features, labels,test_size=0.2,random_state=42)
train_labels.shape
test_labels.shape
test_features.shape
#n_splits=2
#cross_val_score=5
parameters = {'C': np.linspace(0.0001, 100, 20)}
grid_search = GridSearchCV(LogisticRegression(), parameters, cv=5)
grid_search.fit(train_features, train_labels)
print('best parameters: ', grid_search.best_params_)
print('best scores: ', grid_search.best_score_)
lr_clf = LogisticRegression(C=36.84)
lr_clf.fit(train_features, train_labels)
lr_clf.score(test_features, test_labels)
scores = cross_val_score(lr_clf, test_features, test_labels, cv=10)
print("mean: {:.3f} (std: {:.3f})".format(scores.mean(),
scores.std()),
end="\n\n" )
from sklearn.dummy import DummyClassifier
clf = DummyClassifier()
scores = cross_val_score(clf, train_features, train_labels)
print("Dummy classifier score: %0.3f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
df = df.fillna(0) # handle NaN values (assign the result back so the fill actually takes effect)
#Import svm model
from sklearn import svm
#Create a svm Classifier
clf = svm.SVC(kernel='linear') # Linear Kernel
clf.fit(train_features, train_labels)
y_pred = clf.predict(test_features)
y_pred
from sklearn import metrics
# Model Accuracy: how often is the classifier correct?
print("Accuracy:",metrics.accuracy_score(test_features, y_pred)) | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Decision tree** | from sklearn.tree import DecisionTreeClassifier
#n_splits=2
#cross_val_score=5
parameters = {'max_depth': [2, 4, 6, 8], 'criterion': ['gini', 'entropy']}  # hyperparameter grid for the decision tree
grid_search = GridSearchCV(DecisionTreeClassifier(), parameters, cv=5)
grid_search.fit(train_features, train_labels)
print('best parameters: ', grid_search.best_params_)
print('best scores: ', grid_search.best_score_)
clf = DecisionTreeClassifier(max_depth = 2, random_state = 0,criterion='gini')
clf.fit(train_features, train_labels)
# Predict for 1 observation
clf.predict(test_features)
# Predict for multiple observations
#clf.predict(test_features[0:200])
# The score method returns the accuracy of the model
score = clf.score(test_features, test_labels)
print(score)
scores = cross_val_score(clf, test_features, test_labels, cv=10)
print("mean: {:.3f} (std: {:.3f})".format(scores.mean(),
scores.std()),
end="\n\n" ) | mean: 0.814 (std: 0.012)
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**SVM** | from sklearn.svm import SVC
#n_splits=2
#cross_val_score=5
parameters = {'C': np.linspace(0.0001, 100, 20)}
grid_search = GridSearchCV(SVC(), parameters, cv=5)
grid_search.fit(train_features, train_labels)
print('best parameters: ', grid_search.best_params_)
print('best scores: ', grid_search.best_score_)
svclassifier = SVC(kernel='linear')
svclassifier.fit(train_features, train_labels)
y_pred = svclassifier.predict(test_features)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(test_labels,y_pred))
print(classification_report(test_labels,y_pred))
param_grid = {'C':[1,10,100,1000],'gamma':[1,0.1,0.001,0.0001], 'kernel':['linear','rbf']}
grid = GridSearchCV(SVC(),param_grid,refit = True, verbose=2)
grid.fit(train_features,train_labels)
grid.best_params_
predic = grid.predict(test_features)
print(classification_report(test_labels,predic))
print(confusion_matrix(test_labels, predic)) | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
Cross_Val_SVC | from sklearn.model_selection import cross_val_score
from sklearn import svm
clf = svm.SVC(kernel='linear', C=36.84)
scores = cross_val_score(clf, test_features, test_labels, cv=10)
print("mean: {:.3f} (std: {:.3f})".format(scores.mean(),
scores.std()),
end="\n\n" ) | mean: 0.850 (std: 0.011)
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**MLP Best params** | from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(max_iter=100)
from sklearn.datasets import make_classification
parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
}
from sklearn.model_selection import GridSearchCV
clf = GridSearchCV(mlp, parameter_space, n_jobs=-1, cv=3)
clf.fit(train_features, train_labels)
# Best parameter set
print('Best parameters found:\n', clf.best_params_)
# All results
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
y_true, y_pred = test_labels , clf.predict(test_features)
from sklearn.metrics import classification_report, confusion_matrix
print('Results on the test set:')
print(classification_report(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
clf = MLPClassifier(random_state=1, max_iter=300).fit(train_features, train_labels)
clf.predict_proba(test_features[:1])
clf.predict(test_features[:1000, :])
clf.score(test_features, test_labels)
from sklearn.model_selection import cross_val_score
clf = MLPClassifier(random_state=1, max_iter=300)
scores = cross_val_score(clf, test_features, test_labels, cv=10)
print("mean: {:.3f} (std: {:.3f})".format(scores.mean(),
scores.std()),
end="\n\n" ) | mean: 0.872 (std: 0.014)
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Random Forest** | from sklearn.ensemble import RandomForestRegressor
regressor = RandomForestRegressor(n_estimators=20, random_state=0)
regressor.fit(train_features, train_labels)
y_pred1 = regressor.predict(test_features)
y_pred
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
print(confusion_matrix(test_labels,y_pred))
print(classification_report(test_labels, y_pred))
print(accuracy_score(test_labels, y_pred)) | [[616 73]
[ 86 625]]
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
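Note that the cell above fits a `RandomForestRegressor`, whose predictions are continuous scores rather than class labels; for the duplicate / non-duplicate task a classifier is the more natural fit. A minimal sketch (the hyperparameters mirror the regressor above):

from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=20, random_state=0)
rf_clf.fit(train_features, train_labels)
print("Accuracy:", rf_clf.score(test_features, test_labels))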
**Naive Bayes** Gaussian | from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(train_features, train_labels)
y_pred = gnb.predict(test_features)
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(test_labels, y_pred)) | Accuracy: 0.8358333333333333
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
*Cross Validation* |
print("Cross Validation:",cross_val_score(gnb, digits.data, digits.target, scoring='accuracy', cv=10).mean()) | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
Bernoulli: | from sklearn.naive_bayes import BernoulliNB
bnb = BernoulliNB(binarize=0.0)
bnb.fit(train_features, train_labels)
print("Score: ",bnb.score(test_features, test_labels)) | Score: 0.829
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
*Cross Validation* | print("Cross Validation:",cross_val_score(bnb, digits.data, digits.target, scoring='accuracy', cv=10).mean()) | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
Multinomial | from sklearn.naive_bayes import MultinomialNB
mnb = MultinomialNB()
mnb.fit(train_features, train_labels)
mnb.score(test_features, test_labels)
mnb = MultinomialNB()
# note: MultinomialNB expects non-negative features, so the BERT embeddings may need rescaling (e.g. MinMaxScaler) first
cross_val_score(mnb, test_features, test_labels, scoring='accuracy', cv=10).mean()
| _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
Best params SVC | from sklearn.svm import SVC
model = SVC()
model.fit(train_features, train_labels)
prediction = model.predict(test_features)
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(test_labels,prediction))
print(confusion_matrix(test_labels, prediction))
param_grid = {'C':[1,10,100,1000],'gamma':[1,0.1,0.001,0.0001], 'kernel':['linear','rbf']}
grid = GridSearchCV(SVC(),param_grid,refit = True, verbose=2)
grid.fit(train_features,train_labels)
grid.best_params_
predic = grid.predict(test_features)
print(classification_report(test_labels,predic))
print(confusion_matrix(test_labels, predic)) | precision recall f1-score support
0 0.86 0.95 0.90 587
1 0.95 0.85 0.90 613
accuracy 0.90 1200
macro avg 0.91 0.90 0.90 1200
weighted avg 0.91 0.90 0.90 1200
[[558 29]
[ 89 524]]
| MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
**Random Forest Best params** | from sklearn.ensemble import RandomForestClassifier
rfc=RandomForestClassifier(random_state=42)
param_grid = {
'n_estimators': [200, 500],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,5,6,7,8],
'criterion' :['gini', 'entropy']
}
CV_rfc = GridSearchCV(estimator=rfc, param_grid=param_grid, cv= 5)
CV_rfc.fit(train_features, train_labels)
CV_rfc.best_params_
rfc1=RandomForestClassifier(random_state=42, max_features='auto', n_estimators= 200, max_depth=8, criterion='gini')
rfc1.fit(train_features, train_labels)
pred=rfc1.predict(test_features)
from sklearn.metrics import accuracy_score
print("Accuracy for Random Forest on CV data: ",accuracy_score(test_labels,pred))
from dnn_classifier import DNNClassifier | _____no_output_____ | MIT | Copie de test_tokens(1).ipynb | asma-miladi/DupBugRep-Scripts |
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression Assignment 🎯 You'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.- [ ] Begin with baselines for classification.- [ ] Use scikit-learn for logistic regression.- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)- [ ] Get your model's test accuracy. (One time, at the end.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals- [ ] Add your own stretch goal(s) !- [ ] Make exploratory visualizations.- [ ] Do one-hot encoding.- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).- [ ] Get and plot your coefficients.- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html). **Load Data** | %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
# Load data downloaded from https://srcole.github.io/100burritos/
import pandas as pd
df = pd.read_csv(DATA_PATH+'burritos/burritos.csv')
# Derive binary classification target:
# We define a 'Great' burrito as having an
# overall rating of 4 or higher, on a 5 point scale.
# Drop unrated burritos.
df = df.dropna(subset=['overall'])
df['Great'] = df['overall'] >= 4
# Clean/combine the Burrito categories
df['Burrito'] = df['Burrito'].str.lower()
california = df['Burrito'].str.contains('california')
asada = df['Burrito'].str.contains('asada')
surf = df['Burrito'].str.contains('surf')
carnitas = df['Burrito'].str.contains('carnitas')
df.loc[california, 'Burrito'] = 'California'
df.loc[asada, 'Burrito'] = 'Asada'
df.loc[surf, 'Burrito'] = 'Surf & Turf'
df.loc[carnitas, 'Burrito'] = 'Carnitas'
df.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'
# Drop some high cardinality categoricals
df = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])
# Drop some columns to prevent "leakage"
df = df.drop(columns=['Rec', 'overall']) | _____no_output_____ | MIT | module4-logistic-regression/Logistic_Regression_Assignment.ipynb | afroman32/DS-Unit-2-Linear-Models |
**Train, Val, Test split** | df.shape
# reset index because it skipped a couple of numbers
df.reset_index(inplace = True, drop = True)
df.head(174)
# convert Great column to 1 for True and 0 for False
key = {True: 1, False:0}
df['Great'].replace(key, inplace = True)
# convert date column to datetime
df['Date'] = pd.to_datetime(df['Date'])
df.head(20)
df.isnull().sum()
train = pd.DataFrame()
val = pd.DataFrame()
test = pd.DataFrame()
# train, val, test split
for i in range(0, df.shape[0]):
if df['Date'][i].year <= 2016:
train = train.append(pd.DataFrame(df.loc[i]).T)
elif df['Date'][i].year == 2017:
val = val.append(pd.DataFrame(df.loc[i]).T)
else:
test = test.append(pd.DataFrame(df.loc[i]).T)
print(train.shape, val.shape, test.shape)
# check to make sure the split is correct
print(train['Date'].describe(), '\n')
print(val['Date'].describe(), '\n')
print(test['Date'].describe()) | count 298
unique 110
top 2016-08-30 00:00:00
freq 29
first 2011-05-16 00:00:00
last 2016-12-15 00:00:00
Name: Date, dtype: object
count 85
unique 42
top 2017-04-07 00:00:00
freq 6
first 2017-01-04 00:00:00
last 2017-12-29 00:00:00
Name: Date, dtype: object
count 38
unique 17
top 2019-08-27 00:00:00
freq 9
first 2018-01-02 00:00:00
last 2026-04-25 00:00:00
Name: Date, dtype: object
| MIT | module4-logistic-regression/Logistic_Regression_Assignment.ipynb | afroman32/DS-Unit-2-Linear-Models |
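The same year-based split can also be done with boolean masks instead of the row-by-row loop above; an equivalent vectorized sketch:

year = df['Date'].dt.year
train = df[year <= 2016]
val = df[year == 2017]
test = df[year >= 2018]
print(train.shape, val.shape, test.shape)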
**Baseline** | target = 'Great'
features = ['Yelp', 'Google', 'Cost', 'Hunger', 'Tortilla', 'Temp', 'Meat',
'Fillings', 'Meat:filling', 'Uniformity', 'Salsa', 'Synergy', 'Wrap']
X_train = train[features]
y_train = train[target].astype(int)
X_val = val[features]
y_val = val[target].astype(int)
y_train.value_counts(normalize = True)
y_val.value_counts(normalize = True)
from sklearn.metrics import accuracy_score
majority_case = y_train.mode()[0]
y_pred = [majority_case] * len(y_train)
accuracy_score(y_train, y_pred)
from sklearn.dummy import DummyClassifier
# Fit the Dummy Classifier
baseline = DummyClassifier(strategy = 'most_frequent')
baseline.fit(X_train, y_train)
# Make Predictions on validation data
y_pred = baseline.predict(X_val)
accuracy_score(y_val, y_pred) | _____no_output_____ | MIT | module4-logistic-regression/Logistic_Regression_Assignment.ipynb | afroman32/DS-Unit-2-Linear-Models |
**Logistic Regression Model** | import category_encoders as ce
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
target = 'Great'
features = ['Yelp', 'Google', 'Cost', 'Hunger', 'Tortilla', 'Temp', 'Meat',
'Fillings', 'Meat:filling', 'Uniformity', 'Salsa', 'Synergy', 'Wrap']
X_train = train[features]
y_train = train[target].astype(int)
X_val = val[features]
y_val = val[target].astype(int)
# one hot encode
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
# fill missing values
imputer = SimpleImputer(strategy = 'mean')
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
# scale it
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
# validation error
model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy:', model.score(X_val_scaled, y_val))
# define test X matrix
X_test = test[features]
y_test = test[target].astype(int)
# encode X_test
X_test_encoded = encoder.transform(X_test)
# fill missing values
X_test_imputed = imputer.transform(X_test_encoded)
# scale X_test
X_test_scaled = scaler.transform(X_test_imputed)
# get predictions
y_pred = model.predict(X_test_scaled)
print(f'Test Accuracy: {model.score(X_test_scaled, y_test)}')
| _____no_output_____ | MIT | module4-logistic-regression/Logistic_Regression_Assignment.ipynb | afroman32/DS-Unit-2-Linear-Models |
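The stretch goals mention scikit-learn pipelines; the encode / impute / scale / fit chain above could be bundled into a single estimator. A hedged sketch under the same imports (not part of the graded solution):

from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    SimpleImputer(strategy='mean'),
    StandardScaler(),
    LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy:', pipeline.score(X_val, y_val))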
Seminar: Monte-Carlo tree search. In this seminar, we'll implement vanilla MCTS planning and use it to solve some Gym envs. But before we do that, we first need to modify the gym env to allow saving and loading game states to facilitate backtracking. | from collections import namedtuple
from pickle import dumps, loads
import gym                        # used below by gym.make
import numpy as np                # used below in ucb_score (np.sqrt, np.log)
import matplotlib.pyplot as plt   # used below to display rendered frames
from gym.core import Wrapper
# a container for get_result function below. Works just like tuple, but prettier
ActionResult = namedtuple(
"action_result", ("snapshot", "observation", "reward", "is_done", "info"))
class WithSnapshots(Wrapper):
"""
Creates a wrapper that supports saving and loading environment states.
Required for planning algorithms.
This class will have access to the core environment as self.env, e.g.:
- self.env.reset() #reset original env
- self.env.ale.cloneState() #make snapshot for atari. load with .restoreState()
- ...
You can also use reset, step and render directly for convenience.
- s, r, done, _ = self.step(action) #step, same as self.env.step(action)
- self.render(close=True) #close window, same as self.env.render(close=True)
"""
def get_snapshot(self, render=False):
"""
:returns: environment state that can be loaded with load_snapshot
Snapshots guarantee same env behaviour each time they are loaded.
Warning! Snapshots can be arbitrary things (strings, integers, json, tuples)
Don't count on them being pickle strings when implementing MCTS.
Developer Note: Make sure the object you return will not be affected by
anything that happens to the environment after it's saved.
You shouldn't, for example, return self.env.
In case of doubt, use pickle.dumps or deepcopy.
"""
if render:
self.render() # close popup windows since we can't pickle them
self.close()
if self.unwrapped.viewer is not None:
self.unwrapped.viewer.close()
self.unwrapped.viewer = None
return dumps(self.env)
def load_snapshot(self, snapshot, render=False):
"""
Loads snapshot as current env state.
Should not change snapshot inplace (in case of doubt, deepcopy).
"""
assert not hasattr(self, "_monitor") or hasattr(
self.env, "_monitor"), "can't backtrack while recording"
if render:
self.render() # close popup windows since we can't load into them
self.close()
self.env = loads(snapshot)
def get_result(self, snapshot, action):
"""
A convenience function that
- loads snapshot,
- commits action via self.step,
- and takes snapshot again :)
:returns: next snapshot, next_observation, reward, is_done, info
Basically it returns next snapshot and everything that env.step would have returned.
"""
self.load_snapshot(snapshot)
s, r, done, _ = self.step(action)
next_snapshot = self.get_snapshot()
return ActionResult(next_snapshot, #fill in the variables
s,
r, done, _) | _____no_output_____ | MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
try out snapshots: | # make env
env = WithSnapshots(gym.make("CartPole-v0"))
env.reset()
n_actions = env.action_space.n
print("initial_state:")
plt.imshow(env.render('rgb_array'))
env.close()
# create first snapshot
snap0 = env.get_snapshot()
# play without making snapshots (faster)
while True:
is_done = env.step(env.action_space.sample())[2]
if is_done:
print("Whoops! We died!")
break
print("final state:")
plt.imshow(env.render('rgb_array'))
env.close()
# reload initial state
env.load_snapshot(snap0)
print("\n\nAfter loading snapshot")
plt.imshow(env.render('rgb_array'))
env.close()
# get outcome (snapshot, observation, reward, is_done, info)
res = env.get_result(snap0, env.action_space.sample())
snap1, observation, reward = res[:3]
# second step
res2 = env.get_result(snap1, env.action_space.sample()) | _____no_output_____ | MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
MCTS: Monte-Carlo tree search. In this section, we'll implement the vanilla MCTS algorithm with UCB1-based node selection. We will start by implementing the `Node` class - a simple class that acts like an MCTS node and supports some of the MCTS algorithm steps. This MCTS implementation makes some assumptions about the environment; you can find those _in the notes section at the end of the notebook_. | assert isinstance(env,WithSnapshots)
class Node:
""" a tree node for MCTS """
#metadata:
parent = None #parent Node
value_sum = 0. #sum of state values from all visits (numerator)
times_visited = 0 #counter of visits (denominator)
def __init__(self,parent,action):
"""
Creates an empty node with no children.
Does so by committing an action and recording the outcome.
:param parent: parent Node
:param action: action to commit from parent Node
"""
self.parent = parent
self.action = action
self.children = set() #set of child nodes
#get action outcome and save it
res = env.get_result(parent.snapshot,action)
self.snapshot,self.observation,self.immediate_reward,self.is_done,_ = res
def is_leaf(self):
return len(self.children)==0
def is_root(self):
return self.parent is None
def get_mean_value(self):
return self.value_sum / self.times_visited if self.times_visited !=0 else 0
def ucb_score(self,scale=10,max_value=1e100):
"""
Computes the UCB-1 upper bound using the current value and visit counts of the node and its parent.
:param scale: Multiplies upper bound by that. From hoeffding inequality, assumes reward range to be [0,scale].
:param max_value: a value that represents infinity (for unvisited nodes)
"""
if self.times_visited == 0:
return max_value
#compute ucb-1 additive component (to be added to mean value)
#hint: you can use self.parent.times_visited for N times node was considered,
# and self.times_visited for n times it was visited
U = np.sqrt(2*np.log(self.parent.times_visited)/self.times_visited)
return self.get_mean_value() + scale*U
#MCTS steps
def select_best_leaf(self):
"""
Picks the leaf with highest priority to expand
Does so by recursively picking nodes with best UCB-1 score until it reaches the leaf.
"""
if self.is_leaf():
return self
children = self.children
# best_child = <select best child node in terms of node.ucb_score()>
best_child = max([(child.ucb_score(), child) for child in children], key=lambda x: x[0])[1]
return best_child.select_best_leaf()
def expand(self):
"""
Expands the current node by creating all possible child nodes.
Then returns one of those children.
"""
assert not self.is_done, "can't expand from terminal state"
for action in range(n_actions):
self.children.add(Node(self,action))
return self.select_best_leaf()
def rollout(self,t_max=10**4):
"""
Play the game from this state to the end (done) or for t_max steps.
On each step, pick action at random (hint: env.action_space.sample()).
Compute the sum of rewards from the current state until the episode ends (or t_max steps elapse).
Note 1: use env.action_space.sample() for random action
Note 2: if node is terminal (self.is_done is True), just return 0
"""
#set env into the appropriate state
env.load_snapshot(self.snapshot)
obs = self.observation
is_done = self.is_done
#<your code here - rollout and compute reward>
rollout_reward = 0
while not is_done and t_max>0:
t_max-=1
_, r, is_done, _ = env.step(env.action_space.sample())
rollout_reward += r
return rollout_reward
def propagate(self,child_value):
"""
Uses child value (sum of rewards) to update parents recursively.
"""
#compute node value
my_value = self.immediate_reward + child_value
#update value_sum and times_visited
self.value_sum+=my_value
self.times_visited+=1
#propagate upwards
if not self.is_root():
self.parent.propagate(my_value)
def safe_delete(self):
"""safe delete to prevent memory leak in some python versions"""
del self.parent
for child in self.children:
child.safe_delete()
del child
class Root(Node):
def __init__(self,snapshot,observation):
"""
creates special node that acts like tree root
:snapshot: snapshot (from env.get_snapshot) to start planning from
:observation: last environment observation
"""
self.parent = self.action = None
self.children = set() #set of child nodes
#root: load snapshot and observation
self.snapshot = snapshot
self.observation = observation
self.immediate_reward = 0
self.is_done=False
@staticmethod
def from_node(node):
"""initializes node as root"""
root = Root(node.snapshot,node.observation)
#copy data
copied_fields = ["value_sum","times_visited","children","is_done"]
for field in copied_fields:
setattr(root,field,getattr(node,field))
return root | _____no_output_____ | MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
Main MCTS loop With everything we have implemented, MCTS boils down to a trivial piece of code. | def plan_mcts(root,n_iters=10):
"""
builds tree with monte-carlo tree search for n_iters iterations
:param root: tree node to plan from
:param n_iters: how many select-expand-simulate-propagate loops to make
"""
for _ in range(n_iters):
# PUT CODE HERE
node = root.select_best_leaf()
if node.is_done:
node.propagate(0)
else: #node is not terminal
#<expand-simulate-propagate loop>
child = node.expand()
rollout_reward = child.rollout()
node.propagate(rollout_reward) | _____no_output_____ | MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
Plan and execute In this section, we use the MCTS implementation to find the optimal policy. | env = WithSnapshots(gym.make("CartPole-v0"))
root_observation = env.reset()
root_snapshot = env.get_snapshot()
root = Root(root_snapshot, root_observation)
#plan from root:
plan_mcts(root,n_iters=1000)
from IPython.display import clear_output
from itertools import count
from gym.wrappers import Monitor
total_reward = 0 #sum of rewards
test_env = loads(root_snapshot) #env used to show progress
for i in count():
#get best child
# best_child = <select child with highest mean reward>
best_child = max([(child.get_mean_value(), child) for child in root.children], key=lambda x: x[0])[1]
#take action
s,r,done,_ = test_env.step(best_child.action)
#show image
clear_output(True)
plt.title("step %i"%i)
plt.imshow(test_env.render('rgb_array'))
plt.show()
total_reward += r
if done:
print("Finished with reward = ",total_reward)
break
#discard unrealized part of the tree [because not every child matters :(]
for child in root.children:
if child != best_child:
child.safe_delete()
#declare best child a new root
root = Root.from_node(best_child)
# assert not root.is_leaf(), "We ran out of tree! Need more planning! Try growing tree right inside the loop."
#you may want to expand tree here
#<your code here>
if root.is_leaf():
plan_mcts(root,n_iters=10) | _____no_output_____ | MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
Submit to Coursera | from submit import submit_mcts
submit_mcts(total_reward, "[email protected]", "xx") | Submitted to Coursera platform. See results on assignment page!
| MIT | week6/practice_mcts.ipynb | Iramuk-ganh/practical-rl |
$$\newcommand\bs[1]{\boldsymbol{#1}}$$ Introduction This chapter is light but contains some important definitions. The identity matrix and the inverse of a matrix are concepts that will be very useful in subsequent chapters. Using these concepts, we will see how vectors and matrices can be transformed. To fully understand the intuition behind these operations, we'll take a look at the geometric interpretation of linear algebra. This will help us visualize otherwise abstract operations. Then, at the end of this chapter, we'll use the concepts of matrix inversion and the identity matrix to solve a simple system of linear equations! Once you see this approach, you'll never want to use the algebraic methods of substitution or elimination that you learned in high school! 3.3 Identity and Inverse Matrices Identity matrices The identity matrix $\bs{I}_n$ is a special matrix of shape ($n \times n$) that is filled with $0$ except for the diagonal, which is filled with $1$. A 3 by 3 identity matrix. More generally, $$I_1 = \begin{bmatrix}1 \end{bmatrix},\ I_2 = \begin{bmatrix}1 & 0 \\0 & 1 \end{bmatrix},\ I_3 = \begin{bmatrix}1 & 0 & 0 \\0 & 1 & 0 \\0 & 0 & 1 \end{bmatrix},\ \cdots ,\ I_n = \begin{bmatrix}1 & 0 & 0 & \cdots & 0 \\0 & 1 & 0 & \cdots & 0 \\0 & 0 & 1 & \cdots & 0 \\\vdots & \vdots & \vdots & \ddots & \vdots \\0 & 0 & 0 & \cdots & 1 \end{bmatrix}$$ An identity matrix can be created with the Numpy function `eye()`: | np.eye(3) # 3 rows, and 3 columns
When you "apply" the identity matrix to a vector using the dot product, the result is the same vector:$$\bs{I}_n\bs{x} = \bs{x}$$ Example 1$$\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & 1 & 0 \\\\ 0 & 0 & 1\end{bmatrix}\times\begin{bmatrix} x_{1} \\\\ x_{2} \\\\ x_{3}\end{bmatrix}=\begin{bmatrix} 1 \times x_1 + 0 \times x_2 + 0\times x_3 \\\\ 0 \times x_1 + 1 \times x_2 + 0\times x_3 \\\\ 0 \times x_1 + 0 \times x_2 + 1\times x_3\end{bmatrix}=\begin{bmatrix} x_{1} \\\\ x_{2} \\\\ x_{3}\end{bmatrix}$$Hence, the name **identity** matrix. | x = np.array([[2], [6], [3]])
x
x_id = np.eye(x.shape[0]).dot(x)
x_id | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
More generally, when $\bs{A}$ is an $m\times n$ matrix, it is a property of matrix multiplication that: $$I_m\bs{A} = \bs{A}I_n = \bs{A}$$ Visualizing the intuition Vectors and matrices occupy $n$-dimensional space. This perspective allows us to think about linear algebra geometrically and, if we're lucky enough to be working with $<3$ dimensions, visually. For example, if you had a $2$-dimensional vector $\bs{v}$, you could think of the vector as an ordered pair of real numbers ($a,b$). This ordered pair could then be plotted in a Cartesian coordinate system with a line connecting it to the origin. If you had two such vectors ($\bs{v}$ and $\bs{w}$), a simple vector operation like addition also has a geometric picture: place the tail of $\bs{w}$ at the head of $\bs{v}$, and the sum $\bs{v}+\bs{w}$ runs from the origin to the new endpoint; mathematically, the components simply add element-wise. Now that we've got you thinking about linear algebra geometrically, consider the identity matrix. If you multiply a vector by the identity matrix, you're technically applying a **transformation** to that vector. But since your multiplier was the identity matrix $\bs{I}$, the transformation just outputs the multiplicand, $\bs{x}$, itself. That's what happened above when we saw that $\bs{x}$ was not altered after being multiplied by $\bs{I}$. Visually, nothing would happen to your line. But what if we slightly change our identity matrix? What if, for example, we change the $1$'s to $2$'s like so: $$\begin{bmatrix} 2 & 0 & 0 \\\\ 0 & 2 & 0 \\\\ 0 & 0 & 2\end{bmatrix}$$ That would double each element in vector $\bs{x}$. Visually, that would make the line twice as long. The takeaway here is that you can define an **operation matrix** to transform vectors. Here are a few examples of the types of transformations you could do: - Scale: make all inputs bigger/smaller - Skew: make certain inputs bigger/smaller - Flip: make inputs negative - Rotate: make new coordinates based on old ones (e.g. East becomes North, North becomes West, etc.) In short, all of these are geometric interpretations of multiplication. Each of them provides a means to warp a vector space. Inverse Matrices If you have a square matrix (i.e. a matrix with the same number of columns and rows) then you can calculate the inverse of that matrix so long as its [determinant](https://en.wikipedia.org/wiki/Determinant) doesn't equal 0 (more on the determinant in a later lesson!). The inverse of $\bs{A}$ is denoted $\bs{A}^{-1}$. It is the matrix that results in the identity matrix when it is multiplied by $\bs{A}$: $$\bs{A}^{-1}\bs{A}=\bs{I}_n$$ This means that if we apply a linear transformation to the space with $\bs{A}$, it is possible to go back with $\bs{A}^{-1}$. It provides a way to cancel the transformation. Example 2 $$\bs{A}=\begin{bmatrix} 3 & 0 & 2 \\\\ 2 & 0 & -2 \\\\ 0 & 1 & 1\end{bmatrix}$$ For this example, we will use the numpy function `linalg.inv()` to calculate the inverse of $\bs{A}$. Let's start by creating $\bs{A}$. If you want to learn about the nitty gritty details behind this operation, check out [this](https://www.mathsisfun.com/algebra/matrix-inverse-minors-cofactors-adjugate.html) or [this](https://www.mathsisfun.com/algebra/matrix-inverse-row-operations-gauss-jordan.html). | A = np.array([[3, 0, 2], [2, 0, -2], [0, 1, 1]])
A | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
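Two quick asides before computing the inverse, both minimal sketches reusing objects defined above: the scaling transformation described earlier (an identity matrix with $2$'s on the diagonal) really does double the vector $\bs{x}$ from Example 1, and $\bs{A}$ is invertible because its determinant is non-zero:
# Aside 1: a "doubled" identity matrix scales x, doubling each element
scaling = 2 * np.eye(3)
print(scaling.dot(x))      # [[4.], [12.], [6.]]
# Aside 2: A has an inverse only if its determinant is non-zero
print(np.linalg.det(A))    # 10.0, non-zero, so the inverse exists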
Now we calculate its inverse: | A_inv = np.linalg.inv(A)
A_inv | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
We can check that $\bs{A^{-1}}$ is the inverse of $\bs{A}$ with Python: | A_bis = A_inv.dot(A)
A_bis | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
Sovling a system of linear equationsThe inverse matrix can be used to solve the equation $\bs{Ax}=\bs{b}$ by adding it to each term:$$\bs{A}^{-1}\bs{Ax}=\bs{A}^{-1}\bs{b}$$Since we know by definition that $\bs{A}^{-1}\bs{A}=\bs{I}$, we have:$$\bs{I}_n\bs{x}=\bs{A}^{-1}\bs{b}$$We saw that a vector is not changed when multiplied by the identity matrix. So we can write:$$\bs{x}=\bs{A}^{-1}\bs{b}$$This is great because we now have our vector of variables $\bs{x}$ all by itself on the right side of the equation! This means we can solve for $\bs{x}$ by simply computing the dot product of $\bs{A^-1}$ and $\bs{b}$!Let's try that! Example 3We will take a simple solvable example:$$\begin{cases}y = 2x \\\\y = -x +3\end{cases}$$First, lets be sure we're using the notation we saw in above:$$\begin{cases}A_{1,1}x_1 + A_{1,2}x_2 = b_1 \\\\A_{2,1}x_1 + A_{2,2}x_2= b_2\end{cases}$$Here, $x_1$ corresponds to $x$ and $x_2$ corresponds to $y$. So we have:$$\begin{cases}2x_1 - x_2 = 0 \\\\x_1 + x_2= 3\end{cases}$$Our matrix $\bs{A}$ of weights is:$$\bs{A}=\begin{bmatrix} 2 & -1 \\\\ 1 & 1\end{bmatrix}$$And the vector $\bs{b}$ containing the solutions of individual equations is:$$\bs{b}=\begin{bmatrix} 0 \\\\ 3\end{bmatrix}$$Under the matrix form, our systems becomes:$$\begin{bmatrix} 2 & -1 \\\\ 1 & 1\end{bmatrix}\begin{bmatrix} x_1 \\\\ x_2\end{bmatrix}=\begin{bmatrix} 0 \\\\ 3\end{bmatrix}$$Let's define $\bs{A}$: | A = np.array([[2, -1], [1, 1]])
A | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
And let's define $\bs{b}$: | b = np.array([[0], [3]]) | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
And now let's find the inverse of $\bs{A}$: | A_inv = np.linalg.inv(A)
A_inv | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
Since we saw that$$\bs{x}=\bs{A}^{-1}\bs{b}$$We have: | x = A_inv.dot(b)
x | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
This is our solution! $$\bs{x}=\begin{bmatrix} 1 \\\\ 2\end{bmatrix}$$Going back to the geometric interpretion of linear algebra, you can think of our solution vector $\bs{x}$ as containing a set of coordinates ($1, 2$). This point in a $2$-dimensional Cartesian plane is actually the intersection of the two lines representing the equations! Let's plot this to visually check the solution: | #to draw the equation with Matplotlib, first create a vector with some x values
x = np.arange(-10, 10)
#then create some y values for each of those x values using the equation
y = 2*x
y1 = -x + 3
#then instantiate the plot figure
plt.figure()
#draw the first line
plt.plot(x, y)
#draw the second line
plt.plot(x, y1)
#set the limits of the axes
plt.xlim(0, 3)
plt.ylim(0, 3)
#draw the axes
plt.axvline(x=0, color='grey')
plt.axhline(y=0, color='grey') | _____no_output_____ | Unlicense | Learn Math/3. Linear Algebra/3.3 Identity and Inverse Matrices/3.3 Identity and Inverse Matrices.ipynb | mcallistercs/learning-data-science |
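To make the visual check explicit, here is a minimal sketch that re-draws both lines and marks the computed solution $(x_1, x_2) = (1, 2)$ at their intersection:
#re-draw the two lines and mark the computed solution at their intersection
x = np.arange(-10, 10)
plt.figure()
plt.plot(x, 2*x)
plt.plot(x, -x + 3)
#mark the solution point (1, 2)
plt.plot(1, 2, marker='o', markersize=10, color='black')
plt.xlim(0, 3)
plt.ylim(0, 3)
plt.axvline(x=0, color='grey')
plt.axhline(y=0, color='grey')
plt.show()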
ESML - accelerator: Quick DEMO | import sys, os
sys.path.append(os.path.abspath("../azure-enterprise-scale-ml/esml/common/")) # NOQA: E402
from esml import ESMLDataset, ESMLProject
p = ESMLProject() # Will search in ROOT for your copied SETTINGS folder '../../../settings', you should copy template settings from '../settings'
#p = ESMLProject(True) # Demo settings, will search in internal TEMPLATE SETTINGS folder '../settings'
#p.dev_test_prod = "dev"
p.describe() | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication
auth = InteractiveLoginAuthentication(tenant_id = p.tenant)
ws = Workspace.get(name = p.workspace_name, subscription_id = p.subscription_id, resource_group = p.resource_group, auth=auth)
ws.write_config(path=".", file_name="../../ws_config.json")
ws = Workspace.from_config("../ws_config.json") # Reads config.json
2) ESML will Automap and Autoregister Azure ML Datasets - IN, SILVER, BRONZE, GOLD - `Automap` and `Autoregister` Azure ML Datasets as: `IN, SILVER, BRONZE, GOLD` | from azureml.core import Workspace
ws, config_name = p.authenticate_workspace_and_write_config()
ws = p.get_workspace_from_config()
ws.name
print("Are we in R&D state (no dataset versioning) = {}".format(p.rnd))
p.unregister_all_datasets(ws) # DEMO purpose
datastore = p.init(ws) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3) IN->`BRONZE->SILVER`->Gold- Create dataset from PANDAS - Save to SILVER | import pandas as pd
ds = p.DatasetByName("ds01_diabetes")
df = ds.Bronze.to_pandas_dataframe()
df.head() | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3) BRONZE-SILVER (EDIT rows & SAVE)- Test changing rows, same structure = new version (and new file added)- Note: earlier files in the folder are not removed. They are needed for other "versions". - Expected: For 3 files: New version, 997 rows: 2 older files=627 + 1 new file=370- Expected (if we delete OLD files): New version, with fewer rows. 370 instead of 997 | df_filtered = df[df.AGE > 0.015]
print(df.shape[0], df_filtered.shape[0]) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3a) Save `SILVER` ds01_diabetes | aml_silver = p.save_silver(p.DatasetByName("ds01_diabetes"),df_filtered)
aml_silver.name | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
COMPARE `BRONZE vs SILVER`- Compare and validate the feature engineering | ds01 = p.DatasetByName("ds01_diabetes")
bronze_rows = ds01.Bronze.to_pandas_dataframe().shape[0]
silver_rows = ds01.Silver.to_pandas_dataframe().shape[0]
print("Bronze: {}".format(bronze_rows)) # Expected 442 rows
print("Silver: {}".format(silver_rows)) # Expected 185 rows (filtered)
assert bronze_rows == 442,"BRONZE Should have 442 rows to start with, but is {}".format(bronze_rows)
assert silver_rows == 185,"SILVER should have 185 after filtering, but is {}".format(silver_rows) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3b) Save `BRONZE β SILVER` ds02_other | df_edited = p.DatasetByName("ds02_other").Silver.to_pandas_dataframe()
ds02_silver = p.save_silver(p.DatasetByName("ds02_other"),df_edited)
ds02_silver.name | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3c) Merge all `SILVERS -> then save GOLD` | df_01 = ds01.Silver.to_pandas_dataframe()
df_02 = ds02_silver.to_pandas_dataframe()
df_gold1_join = df_01.join(df_02) # left join -> NULL on df_02
print("Diabetes shape: ", df_01.shape)
print(df_gold1_join.shape) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
Save `GOLD` v1 | print(p.rnd)
p.rnd=False # Allow versioning on DATASETS, to have lineage
ds_gold_v1 = p.save_gold(df_gold1_join) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
3c) Oops! "faulty" GOLD - too many features | print(p.Gold.to_pandas_dataframe().shape) # 19 features...I want 11
print("Are we in RnD phase? Or do we have 'versioning on datasets=ON'")
print("RnD phase = {}".format(p.rnd)) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
Save `GOLD` v2 | # Lets just go with features from ds01
ds_gold_v1 = p.save_gold(df_01) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
Get `GOLD` by version | gold_1 = p.get_gold_version(1)
gold_1.to_pandas_dataframe().shape # (185, 19)
gold_2 = p.get_gold_version(2)
gold_2.to_pandas_dataframe().shape # (185, 11)
p.Gold.to_pandas_dataframe().shape # Latest version (185, 11)
df_01_filtered = df_01[df_01.AGE > 0.03807]
ds_gold_v1 = p.save_gold(df_01_filtered)
gold_2 = p.get_gold_version(3) # sliced, from latest version
gold_2.to_pandas_dataframe().shape # (113, 11) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
TRAIN - `AutoMLFactory + ComputeFactory` | from baselayer_azure_ml import AutoMLFactory, ComputeFactory
p.dev_test_prod = "test"
print("what environment are we targeting? = {}".format(p.dev_test_prod))
automl_performance_config = p.get_automl_performance_config()
automl_performance_config
p.dev_test_prod = "dev"
automl_performance_config = p.get_automl_performance_config()
automl_performance_config | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
Get `COMPUTE` for current `ENVIRONMENT` | aml_compute = p.get_training_aml_compute(ws) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
`TRAIN` model -> See other notebook `esml_howto_2_train.ipynb` | from azureml.train.automl import AutoMLConfig
from baselayer_azure_ml import azure_metric_regression
label = "Y"
train_6, validate_set_2, test_set_2 = p.split_gold_3(0.6,label) # Auto-register in AZURE (M03_GOLD_TRAIN | M03_GOLD_VALIDATE | M03_GOLD_TEST) # Alt: train,testv= p.Gold.random_split(percentage=0.8, seed=23)
automl_config = AutoMLConfig(task = 'regression',
primary_metric = azure_metric_regression.MAE,
experiment_exit_score = '0.208', # DEMO purpose
compute_target = aml_compute,
training_data = p.GoldTrain, # is 'train_6' pandas dataframe, but as an Azure ML Dataset
label_column_name = label,
**automl_performance_config
)
via_pipeline = False
best_run, fitted_model, experiment = AutoMLFactory(p).train_pipeline(automl_config) if via_pipeline else AutoMLFactory(p).train_as_run(automl_config) | _____no_output_____ | MIT | notebook_demos/esml_howto_0_short.ipynb | jostrm/azure-enterprise-scale-ml-usage |
Pymaceuticals Inc. --- Analysis * Capomulin and Ramicane showed the smallest tumor volumes at the end of the study. * There appears to be a correlation between mouse weight and average tumor volume; as weight increases, tumor volume increases. * Capomulin had the lowest IQR, indicating a narrower spread in the results for this drug regimen. | # Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
mouse_metadata.head()
study_results.head()
# Combine the data into a single dataset
clinical_trial=pd.merge(study_results, mouse_metadata, how='left')
clinical_trial.head()
clinical_trial.shape | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
Summary Statistics | # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean_df = clinical_trial.groupby('Drug Regimen').mean().reset_index()
mean_df = mean_df[['Drug Regimen', 'Tumor Volume (mm3)']]
mean_df = mean_df.rename(columns={'Tumor Volume (mm3)':'Mean Tumor Volume'})
mean_df
median_df=clinical_trial.groupby('Drug Regimen').median().reset_index()
median_df=median_df[['Drug Regimen', 'Tumor Volume (mm3)']]
median_df=median_df.rename(columns={'Tumor Volume (mm3)':'Median Tumor Volume'})
median_df
drug_summary=pd.merge(mean_df, median_df, how="inner")
drug_summary
variance_df=clinical_trial.groupby('Drug Regimen').var().reset_index()
variance_df=variance_df[['Drug Regimen', 'Tumor Volume (mm3)']]
variance_df=variance_df.rename(columns={'Tumor Volume (mm3)':'Tumor Volume Variance'})
variance_df
drug_summary=pd.merge(drug_summary, variance_df, how="inner")
drug_summary
std_df=clinical_trial.groupby('Drug Regimen').std().reset_index()
std_df=std_df[['Drug Regimen', 'Tumor Volume (mm3)']]
std_df=std_df.rename(columns={'Tumor Volume (mm3)':'Tumor Volume Std. Dev.'})
std_df
drug_summary=pd.merge(drug_summary, std_df, how="inner")
drug_summary
sem_df=clinical_trial.groupby('Drug Regimen').sem().reset_index()
sem_df=sem_df[['Drug Regimen', 'Tumor Volume (mm3)']]
sem_df=sem_df.rename(columns={'Tumor Volume (mm3)':'Tumor Volume Std. Err.'})
sem_df
drug_summary=pd.merge(drug_summary, sem_df, how="inner")
drug_summary
drug_count=clinical_trial.groupby('Drug Regimen').count().reset_index()
drug_count=drug_count[['Drug Regimen', 'Tumor Volume (mm3)']]
drug_count=drug_count.rename(columns={'Tumor Volume (mm3)':'Count'})
drug_count
drug_summary=pd.merge(drug_summary, drug_count, how="inner")
drug_summary = drug_summary.sort_values('Count', ascending=False)
drug_summary | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
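As a side note, the same summary table can be produced in a single step with `groupby().agg()`; a minimal sketch (the column labels below are chosen to match the merged table above):
# One-step alternative to the merges above, using groupby + agg
summary_agg = clinical_trial.groupby('Drug Regimen')['Tumor Volume (mm3)'].agg(
    ['mean', 'median', 'var', 'std', 'sem', 'count'])
summary_agg.columns = ['Mean Tumor Volume', 'Median Tumor Volume', 'Tumor Volume Variance',
                       'Tumor Volume Std. Dev.', 'Tumor Volume Std. Err.', 'Count']
summary_agg.sort_values('Count', ascending=False)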
Bar and Pie Charts | # Generate a bar plot showing number of data points for each treatment regimen using pandas
drug_summary.sort_values('Count', ascending=False).plot.bar(x="Drug Regimen", y="Count")
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
plt.bar(drug_summary['Drug Regimen'], drug_summary['Count'], color="b", align="center")
plt.xticks(rotation='vertical')
# Create a gender dataframe
gender_df = clinical_trial.groupby('Sex').count()
gender_df = gender_df[['Mouse ID']]
gender_df = gender_df.rename(columns={'Mouse ID':'Gender Count'})
gender_df
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_df.plot.pie(subplots=True)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
genders= ['female', 'male']
plt.pie(gender_df['Gender Count'], labels=genders, autopct="%1.1f%%")
plt.axis('equal')
plt.show() | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
Quartiles, Outliers and Boxplots | # Calculate the final tumor volume of each mouse.
tumor_df = clinical_trial.groupby('Mouse ID').last()
tumor_df.head()
# Calculate the final tumor volume of each mouse in Capomulin treatment regime.
capomulin = tumor_df.loc[(tumor_df['Drug Regimen'] == "Capomulin"),:]
capomulin.head()
# Calculate the IQR and quantitatively determine if there are any potential outliers.
cap_quartiles = capomulin['Tumor Volume (mm3)'].quantile([.25,.5,.75])
cap_lowerq = cap_quartiles[0.25]
cap_upperq = cap_quartiles[0.75]
cap_iqr = cap_upperq-cap_lowerq
print(f"The lower quartile of the Capomulin test group is: {cap_lowerq}")
print(f"The upper quartile of the Capomulin test group is: {cap_upperq}")
print(f"The interquartile range of the Capomulin test group is: {cap_iqr}")
print(f"The the median of the Capomulin test group is: {cap_quartiles[0.5]} ")
cap_lower_bound = cap_lowerq - (1.5*cap_iqr)
cap_upper_bound = cap_upperq + (1.5*cap_iqr)
print(f"Values below {cap_lower_bound} could be outliers.")
print(f"Values above {cap_upper_bound} could be outliers.")
# Calculate the final tumor volume of each mouse in Ramicane treatment regime.
ramicane = tumor_df.loc[(tumor_df['Drug Regimen'] == "Ramicane"),:]
ramicane.head()
# Calculate the IQR and quantitatively determine if there are any potential outliers.
ram_quartiles = ramicane['Tumor Volume (mm3)'].quantile([.25,.5,.75])
ram_lowerq = ram_quartiles[0.25]
ram_upperq = ram_quartiles[0.75]
ram_iqr = ram_upperq-ram_lowerq
print(f"The lower quartile of the Ramicane test group is: {ram_lowerq}")
print(f"The upper quartile of the Ramicane test group is: {ram_upperq}")
print(f"The interquartile range of the Ramicane test group is: {ram_iqr}")
print(f"The the median of the Ramicane test group is: {ram_quartiles[0.5]} ")
ram_lower_bound = ram_lowerq - (1.5*ram_iqr)
ram_upper_bound = ram_upperq + (1.5*ram_iqr)
print(f"Values below {ram_lower_bound} could be outliers.")
print(f"Values above {ram_upper_bound} could be outliers.")
# Calculate the final tumor volume of each mouse in Infubinol treatment regime.
infubinol = tumor_df.loc[(tumor_df['Drug Regimen'] == "Infubinol"),:]
infubinol.head()
# Calculate the IQR and quantitatively determine if there are any potential outliers.
inf_quartiles = infubinol['Tumor Volume (mm3)'].quantile([.25,.5,.75])
inf_lowerq = inf_quartiles[0.25]
inf_upperq = inf_quartiles[0.75]
inf_iqr = inf_upperq-inf_lowerq
print(f"The lower quartile of the Infubinol test group is: {inf_lowerq}")
print(f"The upper quartile of the Infubinol test group is: {inf_upperq}")
print(f"The interquartile range of the Infubinol test group is: {inf_iqr}")
print(f"The the median of the Infubinol test group is: {inf_quartiles[0.5]} ")
inf_lower_bound = inf_lowerq - (1.5*inf_iqr)
inf_upper_bound = inf_upperq + (1.5*inf_iqr)
print(f"Values below {inf_lower_bound} could be outliers.")
print(f"Values above {inf_upper_bound} could be outliers.")
# Calculate the final tumor volume of each mouse in Ceftamin treatment regime.
ceftamin = tumor_df.loc[(tumor_df['Drug Regimen'] == "Ceftamin"),:]
ceftamin.head()
# Calculate the IQR and quantitatively determine if there are any potential outliers.
cef_quartiles = ceftamin['Tumor Volume (mm3)'].quantile([.25,.5,.75])
cef_lowerq = cef_quartiles[0.25]
cef_upperq = cef_quartiles[0.75]
cef_iqr = cef_upperq-cef_lowerq
print(f"The lower quartile of the Infubinol test group is: {cef_lowerq}")
print(f"The upper quartile of the Infubinol test group is: {cef_upperq}")
print(f"The interquartile range of the Infubinol test group is: {cef_iqr}")
print(f"The the median of the Infubinol test group is: {cef_quartiles[0.5]} ")
cef_lower_bound = cef_lowerq - (1.5*cef_iqr)
cef_upper_bound = cef_upperq + (1.5*cef_iqr)
print(f"Values below {cef_lower_bound} could be outliers.")
print(f"Values above {cef_upper_bound} could be outliers.")
#Created new dataframe for four drugs of interest
regimen_of_interest = tumor_df.loc[(tumor_df['Drug Regimen'] == 'Capomulin') |
(tumor_df['Drug Regimen'] == 'Ramicane') |
(tumor_df['Drug Regimen'] == 'Infubinol')|
(tumor_df['Drug Regimen'] == 'Ceftamin')]
regimen_of_interest
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
regimen_of_interest.boxplot('Tumor Volume (mm3)', by='Drug Regimen', figsize=(10, 5))
plt.show | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
Line and Scatter Plots | # Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
clinical_trial.head()
single_mouse = clinical_trial[['Mouse ID', 'Timepoint', 'Tumor Volume (mm3)', 'Drug Regimen']]
single_mouse = single_mouse.loc[(single_mouse['Drug Regimen'] == "Capomulin"),:].reset_index()
single_mouse = single_mouse.loc[(single_mouse['Mouse ID'] == "b128"),:]
single_mouse
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
plt.plot(single_mouse['Timepoint'], single_mouse['Tumor Volume (mm3)'], color='blue', label="Mouse treated with Capomulin, Subject b128")
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Timepoint')
# Create new dataframe
# Capomulin test group
mouse_treatment = clinical_trial[['Mouse ID', 'Drug Regimen']]
mouse_treatment
mean_mouse = clinical_trial.groupby('Mouse ID').mean().reset_index()
mean_mouse.head()
merged_group=pd.merge(mean_mouse, mouse_treatment, how='inner').reset_index()
merged_group.head()
capomulin_test_group = merged_group.loc[(merged_group['Drug Regimen'] == "Capomulin"),:].reset_index()
capomulin_test_group.head()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
weight = capomulin_test_group['Weight (g)']
tumor = capomulin_test_group['Tumor Volume (mm3)']
plt.scatter(weight, tumor, marker="o", facecolors="red", edgecolors="black")
plt.show | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
Correlation and Regression | # Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
vc_slope, vc_int, vc_r, vc_p, vc_std_err = st.linregress(weight, tumor)
vc_fit = vc_slope * weight + vc_int
plt.plot(weight,vc_fit)
weight = capomulin_test_group['Weight (g)']
tumor = capomulin_test_group['Tumor Volume (mm3)']
plt.scatter(weight, tumor, marker="o", facecolors="red", edgecolors="black")
plt.show | _____no_output_____ | ADSL | Pymaceuticals/pymaceuticals_basco.ipynb | bascomary/matplotlib_challenge |
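The cell above fits and draws the regression line; to also report the correlation coefficient named in the heading, a minimal sketch using `scipy.stats.pearsonr` (reusing the `weight` and `tumor` series defined above):
# Correlation between mouse weight and average tumor volume for the Capomulin regimen
corr_coef, p_value = st.pearsonr(weight, tumor)
print(f"The correlation coefficient between mouse weight and average tumor volume is {corr_coef:.2f}")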
In this notebook we'll provide an example of using different openrouteservice APIs to help you look for an apartment. | mkdir ors-apartment
conda create -n ors-apartment python=3.6 shapely
cd ors-apartment
pip install openrouteservice ortools folium
import folium
from openrouteservice import client | _____no_output_____ | Apache-2.0 | python/Apartment_Search.ipynb | Xenovortex/ors-example |
We have just moved to San Francisco with our kids and are looking for the perfect location to get a new home. Our geo intuition tells us we have to look at the data to come to this important decision. So we decide to geek it up a bit. Apartment isochrones There are three different suggested locations for our new home. Let's visualize them and the 15 minute walking radius on a map: | api_key = '' #Provide your personal API key
clnt = client.Client(key=api_key)
# Set up folium map
map1 = folium.Map(tiles='Stamen Toner', location=([37.738684, -122.450523]), zoom_start=12)
# Set up the apartment dictionary with real coordinates
apt_dict = {'first': {'location': [-122.430954, 37.792965]},
'second': {'location': [-122.501636, 37.748653]},
'third': {'location': [-122.446629, 37.736928]}
}
# Request of isochrones with 15 minute footwalk.
params_iso = {'profile': 'foot-walking',
'intervals': [900], # 900/60 = 15 mins
'segments': 900,
'attributes': ['total_pop'] # Get population count for isochrones
}
for name, apt in apt_dict.items():
params_iso['locations'] = apt['location'] # Add apartment coords to request parameters
apt['iso'] = clnt.isochrones(**params_iso) # Perform isochrone request
folium.features.GeoJson(apt['iso']).add_to(map1) # Add GeoJson to map
folium.map.Marker(list(reversed(apt['location'])), # reverse coords due to weird folium lat/lon syntax
icon=folium.Icon(color='lightgray',
icon_color='#cc0000',
icon='home',
prefix='fa',
),
popup=name,
).add_to(map1) # Add apartment locations to map
map1 | _____no_output_____ | Apache-2.0 | python/Apartment_Search.ipynb | Xenovortex/ors-example |
POIs around apartments For the ever-styled foodie parents we are, we need to have the 3 basic things covered: kindergarten, supermarket and hair dresser. Let's see what options we got around our apartments: | # Common request parameters
params_poi = {'request': 'pois',
'sortby': 'distance'}
# POI categories according to
# https://github.com/GIScience/openrouteservice-docs#places-response
categories_poi = {'kindergarten': [153],
'supermarket': [518],
'hairdresser': [395]}
for name, apt in apt_dict.items():
apt['categories'] = dict() # Store in pois dict for easier retrieval
params_poi['geojson'] = apt['iso']['features'][0]['geometry']
print("\n{} apartment".format(name))
for typ, category in categories_poi.items():
params_poi['filter_category_ids'] = category
apt['categories'][typ] = dict()
apt['categories'][typ]['geojson']= clnt.places(**params_poi)['features'] # Actual POI request
print("\t{}: {}".format(typ, # Print amount POIs
len(apt['categories'][typ]['geojson']))) |
first apartment
kindergarten: 1
supermarket: 8
hairdresser: 10
second apartment
kindergarten: 3
supermarket: 1
hairdresser: 4
third apartment
kindergarten: 1
supermarket: 3
hairdresser: 2
| Apache-2.0 | python/Apartment_Search.ipynb | Xenovortex/ors-example |
So, all apartments meet all requirements. Seems like we have to drill down further. Routing from apartments to POIs To decide on a place, we would like to know from which apartment we can reach all required POI categories the quickest. So, first we look at the distances from each apartment to the respective POIs. | # Set up common request parameters
params_route = {'profile': 'foot-walking',
'format_out': 'geojson',
'geometry': 'true',
'geometry_format': 'geojson',
'instructions': 'false',
}
# Set up dict for font-awesome
style_dict = {'kindergarten': 'child',
'supermarket': 'shopping-cart',
'hairdresser': 'scissors'
}
# Store all routes from all apartments to POIs
for apt in apt_dict.values():
for cat, pois in apt['categories'].items():
pois['durations'] = []
for poi in pois['geojson']:
poi_coords = poi['geometry']['coordinates']
# Perform actual request
params_route['coordinates'] = [apt['location'],
poi_coords
]
json_route = clnt.directions(**params_route)
folium.features.GeoJson(json_route).add_to(map1)
folium.map.Marker(list(reversed(poi_coords)),
icon=folium.Icon(color='white',
icon_color='#1a1aff',
icon=style_dict[cat],
prefix='fa'
)
).add_to(map1)
poi_duration = json_route['features'][0]['properties']['summary'][0]['duration']
pois['durations'].append(poi_duration) # Record durations of routes
map1 | _____no_output_____ | Apache-2.0 | python/Apartment_Search.ipynb | Xenovortex/ors-example |
Quickest route to all POIs Now, we only need to determine which apartment is closest to all POI categories. | # Sum up the closest POIs to each apartment
for name, apt in apt_dict.items():
apt['shortest_sum'] = sum([min(cat['durations']) for cat in apt['categories'].values()])
print("{} apartments: {} mins".format(name,
apt['shortest_sum']/60
)
) | first apartments: 37.09 mins
second apartments: 40.325 mins
third apartments: 35.315000000000005 mins
| Apache-2.0 | python/Apartment_Search.ipynb | Xenovortex/ors-example |
Making El Nino Animations El Nino is the warm phase of the __[El Niño–Southern Oscillation (ENSO)](https://en.wikipedia.org/wiki/El_Ni%C3%B1o%E2%80%93Southern_Oscillation)__. It is part of a routine climate pattern that occurs when sea surface temperatures in the tropical Pacific Ocean rise to above-normal levels for an extended period of time. In this Notebook we show how to make animations of sea surface temperature anomalies using the __[NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature (Daily OISST) dataset (NOAA OISST)](https://data.planetos.com/datasets/noaa_oisst_daily_1_4)__. _API documentation is available at http://docs.planetos.com. If you have questions or comments, join the Planet OS Slack community to chat with our development team. For general information on usage of IPython/Jupyter and Matplotlib, please refer to their corresponding documentation: https://ipython.org/ and http://matplotlib.org/_ __This Notebook is running on Python3.__ | import os
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
import imageio
import shutil
import datetime
import matplotlib as mpl
mpl.rcParams['font.family'] = 'Avenir Lt Std'
mpl.rcParams.update({'font.size': 25}) | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key! | API_key = open('APIKEY').read().strip()
server='api.planetos.com/'
version = 'v1' | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
This is the part where you should change the time period if you want an animation of a different time frame. The strongest __[El Nino years](http://ggweather.com/enso/oni.htm)__ have been 1982-83, 1997-98 and 2015-16; however, El Nino has occurred more frequently than that. The NOAA OISST dataset in the Planet OS Datahub starts from 2008, but we can extend the period on request (back to September 1981). Feel free to change time_start and time_end to see how the anomalies looked in different years. You can find the years when El Nino was present __[here](http://ggweather.com/enso/oni.htm)__. | time_start = '2016-01-01T00:00:00'
time_end = '2016-03-10T00:00:00'
dataset_key = 'noaa_oisst_daily_1_4'
variable = 'anom'
area = 'pacific'
latitude_north = 40; latitude_south = -40
longitude_west = -180; longitude_east = -77
anim_name = variable + '_animation_' + str(datetime.datetime.strptime(time_start,'%Y-%m-%dT%H:%M:%S').year) + '.mp4' | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
Download the data with package API- Create package objects- Send commands for the package creation- Download the package files | dh=datahub.datahub(server,version,API_key)
package = package_api.package_api(dh,dataset_key,variable,longitude_west,longitude_east,latitude_south,latitude_north,time_start,time_end,area_name=area)
package.make_package()
package.download_package() | Package exists
| MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
Here we are using xarray to read in the data. We will also rewrite the longitude coordinates: they range from 0 to 360 at first, but Basemap requires longitudes from -180 to 180. | dd1 = xr.open_dataset(package.local_file_name)
dd1['lon'] = ((dd1.lon+180) % 360) - 180 | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
We like to use Basemap to plot the data on a map. Here we define the map area. You can find more information and documentation about Basemap __[here](https://matplotlib.org/basemap/)__. | m = Basemap(projection='merc', lat_0 = 0, lon_0 = (longitude_east + longitude_west)/2,
resolution = 'l', area_thresh = 0.05,
llcrnrlon=longitude_west, llcrnrlat=latitude_south,
urcrnrlon=longitude_east, urcrnrlat=latitude_north)
lons,lats = np.meshgrid(dd1.lon,dd1.lat)
lonmap,latmap = m(lons,lats) | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
Below we make a local folder where we save the images we will use for the animation. No worries - in the end, we will delete the folder from your system. | folder = './ani/'
if not os.path.exists(folder):
os.mkdir(folder) | _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
Now it is time to make an image for every time step. Let's also show the first time step here: | vmin = -5; vmax = 5
for k in range(0,len(dd1[variable])):
filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'
fig=plt.figure(figsize=(12,10))
ax = fig.add_subplot(111)
pcm = m.pcolormesh(lonmap,latmap,dd1[variable][k,0].data,vmin = vmin, vmax = vmax,cmap='bwr')
m.fillcontinents(color='#58606F')
m.drawcoastlines(color='#222933')
m.drawcountries(color='#222933')
m.drawstates(color='#222933')
parallels = np.arange(-40.,41,40)
# labels = [left,right,top,bottom]
m.drawparallels(parallels,labels=[True,False,True,False])
#meridians = np.arange(10.,351.,2.)
#m.drawmeridians(meridians,labels=[True,False,False,True])
cbar = plt.colorbar(pcm,fraction=0.035, pad=0.03)
ttl = plt.title('SST Anomaly ' + str(dd1[variable].time[k].data)[:-19],fontweight = 'bold')
ttl.set_position([.5, 1.05])
if not os.path.exists(folder):
os.mkdir(folder)
plt.savefig(filename)
if k == 0:
plt.show()
plt.close()
| _____no_output_____ | MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
This is the part where we make the animation. | files = sorted(os.listdir(folder))
fileList = []
for file in files:
if not file.startswith('.'):
complete_path = folder + file
fileList.append(complete_path)
writer = imageio.get_writer(anim_name, fps=4)
for im in fileList:
writer.append_data(imageio.imread(im))
writer.close()
print ('Animation is saved as ' + anim_name + ' under current working directory') | Animation is saved as anom_animation_2016.mp4 under current working directory
| MIT | api-examples/El_Nino_animations.ipynb | steffenmodest/notebooks |
And finally, we will delete the folder where the images were saved. Now you just have the animation in your working directory. | shutil.rmtree(folder)
Task 2: Prediction using Unsupervised ML - K-Means Clustering Importing the libraries | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd | _____no_output_____ | Apache-2.0 | Task_2_Clustering.ipynb | BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters |
Importing the dataset | dataset = pd.read_csv('/content/Iris.csv')
dataset.head()
dataset['Species'].describe() | _____no_output_____ | Apache-2.0 | Task_2_Clustering.ipynb | BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters |
Determining K - number of clusters | x = dataset.iloc[:, [0, 1, 2, 3]].values
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 15):
kmeans = KMeans(n_clusters = i, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
| _____no_output_____ | Apache-2.0 | Task_2_Clustering.ipynb | BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters |
Plotting the results - observe 'The elbow' | plt.figure(figsize=(16,8))
plt.style.use('ggplot')
plt.plot(range(1, 15), wcss)
plt.title('The elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS') # Within cluster sum of squares
plt.show() | _____no_output_____ | Apache-2.0 | Task_2_Clustering.ipynb | BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters |
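As an optional cross-check on the elbow reading (not part of the original task), a minimal sketch computing silhouette scores for a few candidate values of K - higher scores indicate better-separated clusters:
# Optional cross-check: silhouette scores for candidate cluster counts
from sklearn.metrics import silhouette_score
for k in range(2, 6):
    labels = KMeans(n_clusters = k, init = 'k-means++', random_state = 0).fit_predict(x)
    print(f"K = {k}: silhouette score = {silhouette_score(x, labels):.3f}")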
Creating the kmeans classifier with K = 3 | kmeans = KMeans(n_clusters = 3, init = 'k-means++',
max_iter = 300, n_init = 10, random_state = 0)
y_kmeans = kmeans.fit_predict(x) | _____no_output_____ | Apache-2.0 | Task_2_Clustering.ipynb | BakkeshAS/GRIP_Task_2_Predict_Optimum_Clusters |
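To inspect the result, a minimal sketch plotting the three clusters and their centroids on the first two of the selected columns (what those columns represent depends on the CSV layout):
# Visualise the clusters and centroids on the first two selected columns
plt.figure(figsize=(10, 6))
for cluster in range(3):
    plt.scatter(x[y_kmeans == cluster, 0], x[y_kmeans == cluster, 1], s=50,
                label=f'Cluster {cluster + 1}')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            s=200, c='yellow', edgecolors='black', label='Centroids')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.show()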