```
from sklearn.datasets import load_breast_cancer
from sklearn.datasets import load_iris
import pandas as pd
from sklearn import datasets
from onepiecepredictor.OnePieceClassifier import OnePieceClassifier
from onepiecepredictor.MultiModelsClassifier import MultiModelsClassifier
from onepiecepredictor.OnePieceRegression import OnePieceRegression
from onepiecepredictor.MultiModelsRegression import MultiModelsRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
```
# A small package for hyperparameter tuning, pipelining, and comparing the performance of multiple models.
### It is a wrapper around the sklearn, xgboost, catboost, and imblearn packages.
## Classification
### Currently supports 7 models for classification:
* LOGISTIC -> Logistic Regression, uses the LogisticRegression class from the sklearn package.
* RF -> Random Forest, uses the RandomForestClassifier class from the sklearn package.
* SVM -> Support Vector Machine, uses the SVC class from the sklearn package.
* KNN -> K Nearest Neighbours, uses the KNeighborsClassifier class from the sklearn package.
* ADABOOST -> Adaptive boosting, uses the AdaBoostClassifier class from the sklearn package.
* XGBOOST -> uses the XGBClassifier class from the xgboost package.
* CATBOOST -> uses the CatBoostClassifier class from the catboost package.
### Pass one of the keywords listed above via the model parameter of OnePieceClassifier to use the respective model.
## Parameter Information for the OnePieceClassifier Class
* X -> array-like (supported by sklearn). If testTrainSplit is passed, this will be split into train and test.
* Y -> array-like (supported by sklearn). If testTrainSplit is passed, this will be split into train and test.
* model -> string. Currently supported models: LOGISTIC, RF, SVM, KNN, ADABOOST, XGBOOST, CATBOOST.
* testX -> array-like (supported by sklearn), test data. Ignored if testTrainSplit is passed.
* testY -> array-like (supported by sklearn), test data. Ignored if testTrainSplit is passed.
* testTrainSplit -> float, the fraction of the data to hold out as the test set.
* stratify -> bool, used to perform stratified splitting. If passed, the data will be split based on Y.
* hyperParams -> dictionary, hyperparameters specific to the model passed. If passed, CV is performed.
* performCV -> bool, used to perform plain CV when hyperParams is not passed.
* folds -> int, number of folds to be used for CV.
* applySmote -> bool, applies SMOTE to oversample the data. Pass only one of applySmote or underSample.
* underSample -> bool, randomly undersamples the majority class.
* sampling -> sampling strategy, values supported by the SMOTE and RandomUnderSampler classes in the imblearn library.
* scoring -> str, evaluation metric. Currently supported values: accuracy, balanced_accuracy, f1, precision, recall, roc_auc. If not passed, accuracy is used.
* targetEncodeCols -> list, list of columns to target encode.
* modelParams -> dictionary, any model-specific parameters can be passed as a dictionary.
* multiClass -> bool, pass True in case of multiclass classification.
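### Multiclass classification example
As an illustrative sketch (not one of the original examples), a multiclass run on the iris dataset imported above could look like the following; the exact behaviour of the multiClass flag and of the default accuracy scoring may vary between package versions.
```
# Illustrative sketch: multiclass classification on iris
data = load_iris()
X = data.data
Y = data.target
op = OnePieceClassifier(X, Y, "RF", testTrainSplit = 0.3,
                        stratify = True, multiClass = True)
op.fit()
score, preds = op.predict()
```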
## Methods in OnePieceClassifier class
* fit() -> for training.
* predict() -> for predicting. Returns the score and the predictions.
* newDataPredict(testData) -> for getting predictions on completely new data. Returns the new predictions.
## Classification with hyperparameter tuning, cross-validation, and stratified splitting
```
hyperParams = {
'gamma': [0.25, 1],
'max_depth': [3, 4]
}
data = load_breast_cancer()
X = data.data
Y = data.target
op = OnePieceClassifier(X, Y, "XGBOOST",testTrainSplit = 0.3,
stratify = True, hyperParams = hyperParams)
op.fit()
score, preds = op.predict()
score
preds
```
### Explicitly pass train and test data sets.
```
data = load_breast_cancer()
X = data.data
Y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, Y, stratify=Y,test_size=0.3, random_state = 7)
op = OnePieceClassifier(X_train, y_train, "XGBOOST", testX = X_test, testY = y_test
,hyperParams = hyperParams)
op.fit()
score, preds = op.predict()
```
### To use any model with specific model parameters, pass a dictionary via the modelParams parameter.
### For example, to use random forest with 'criterion' set to entropy instead of the default gini.
```
data = load_breast_cancer()
X = data.data
Y = data.target
modelParams = {'criterion' : 'entropy'}
hyperParams = {
'n_estimators': [100, 200],
'max_depth': [2, 3]
}
op = OnePieceClassifier(X, Y, "RF",testTrainSplit = 0.2,
stratify = True, hyperParams = hyperParams, scoring = 'f1',modelParams = modelParams)
op.fit()
op.predict()
```
## To compare the performance of multiple classification models with cross-validation
```
mc = MultiModelsClassifier(X, Y, testTrainSplit = 0.3,
stratify = True, scoring = 'accuracy', performCV = True)
results = mc.predict()
print(results)
```
### For imbalanced data, with oversampling: oversample the minority class from 10% to 40% and test on new data
```
X, Y = make_classification(n_classes=2, class_sep=2,
weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
op = OnePieceClassifier(X, Y, "LOGISTIC",testTrainSplit = 0.2, applySmote = True, sampling = 0.4,
stratify = True, scoring = 'f1')
op.fit()
op.predict()
```
### Predict on new data
```
X, Y = make_classification(n_classes=2, class_sep=2,
weights=[0.1, 0.9], n_informative=3, n_redundant=1, flip_y=0,
n_features=20, n_clusters_per_class=1, n_samples=1000, random_state=10)
preds = op.newDataPredict(X)
preds
```
## Regression
### Currently supports 7 models for regression:
* LINEAR -> Linear Regression, uses the LinearRegression class from the sklearn package.
* RF -> Random Forest, uses the RandomForestRegressor class from the sklearn package.
* SVM -> Support Vector Machine, uses the SVR class from the sklearn package.
* KNN -> K Nearest Neighbours, uses the KNeighborsRegressor class from the sklearn package.
* ADABOOST -> Adaptive boosting, uses the AdaBoostRegressor class from the sklearn package.
* XGBOOST -> uses the XGBRegressor class from the xgboost package.
* CATBOOST -> uses the CatBoostRegressor class from the catboost package.
### Parameter Information for the OnePieceRegression Class
* X -> array-like (supported by sklearn). If testTrainSplit is passed, this will be split into train and test.
* Y -> array-like (supported by sklearn). If testTrainSplit is passed, this will be split into train and test.
* model -> string. Currently supported models: LINEAR, RF, SVM, KNN, ADABOOST, XGBOOST, CATBOOST.
* testX -> array-like (supported by sklearn), test data. Ignored if testTrainSplit is passed.
* testY -> array-like (supported by sklearn), test data. Ignored if testTrainSplit is passed.
* testTrainSplit -> float, the fraction of the data to hold out as the test set.
* hyperParams -> dictionary, hyperparameters specific to the model passed. If passed, CV is performed.
* performCV -> bool, used to perform plain CV when hyperParams is not passed.
* folds -> int, number of folds to be used for CV.
* scoring -> str, evaluation metric. Currently supported values: r2, neg_mean_squared_error. If not passed, r2 is used.
* targetEncodeCols -> list, list of columns to target encode.
* modelParams -> dictionary, any model-specific parameters can be passed as a dictionary.
## Methods in OnePieceRegression class
* fit() -> for training.
* predict() -> for predicting. Returns the score and the predictions.
* newDataPredict(testData) -> for getting predictions on completely new data. Returns the new predictions.
## Regression with cross-validation
```
data = datasets.load_boston()
X = data.data
Y = data.target
oreg = OnePieceRegression(X, Y, "SVM", testTrainSplit = 0.1, performCV = True, folds = 3)
oreg.fit()
score, preds = oreg.predict()
score
preds
```
## To compare the performance of multiple regression models with cross-validation
```
data = datasets.load_boston()
X = data.data
Y = data.target
mr = MultiModelsRegression(X, Y, testTrainSplit = 0.1,performCV = True)
results = mr.predict()
results
```
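### Predict on new data with the regression model
As a hedged sketch (not part of the original notebook), the regression classes expose the same newDataPredict() method as the classifiers, so predictions on fresh rows could be obtained from the fitted oreg model above like this:
```
new_X = X[:5]  # pretend these are unseen observations
new_preds = oreg.newDataPredict(new_X)
new_preds
```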
# Custom pre-processors with the V2 protocol
Most of the time, the requests that we send to our model need some kind of processing.
For example, extra information may need to be fetched (e.g. from a feature store) or processed in order to obtain the actual tensors required by the model. One example of this use case is NLP models, where natural language first needs to be tokenised according to a vocabulary, or embedded by a second model.
In this tutorial, we will focus on this latter scenario.
In particular, we will explore how to deploy a _tokeniser_ pre-transformer that converts our natural language text to tokens.
This tokeniser will then be part of an inference graph, so that its output gets routed to a [GPT-2 model deployed using Triton](https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example.html).
> **NOTE**: The tokeniser logic and the Triton artifacts are taken from the [GPT-2 Model example](https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example.html). To learn more about these, feel free to check that tutorial.

## Creating a Tokeniser
In order to create a custom pre-processing step, the first step will be to [write a **custom runtime**](https://mlserver.readthedocs.io/en/latest/runtimes/custom.html) using [MLServer](https://mlserver.readthedocs.io/en/latest/).
MLServer is a production-grade inference server, whose main goal is to ease the serving of models through REST and gRPC interfaces compatible with the [V2 Inference Protocol](https://kserve.github.io/website/modelserving/inference_api/).
As well as an inference server, MLServer also exposes a *framework* which can be leveraged to easily **write your custom inference runtimes**.
These custom runtimes can be used to write any custom logic, including (you guessed it!) our tokeniser pre-processor.
Therefore, we will start by extending the base `mlserver.MLModel` class, adding our custom logic.
Note that this logic is taken (almost) verbatim from the [GPT-2 Model example](https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example.html).
```
# %load tokeniser/runtime.py
from mlserver import MLModel
from mlserver.types import InferenceRequest, InferenceResponse
from mlserver.codecs import NumpyCodec
from mlserver.codecs.string import StringRequestCodec
from transformers import GPT2Tokenizer
class Tokeniser(MLModel):
    async def load(self) -> bool:
        self._tokeniser = GPT2Tokenizer.from_pretrained("gpt2")
        self.ready = True
        return self.ready

    async def predict(self, inference_request: InferenceRequest) -> InferenceResponse:
        sentences = StringRequestCodec.decode(inference_request)
        tokenised = self._tokeniser(sentences, return_tensors="np")

        outputs = []
        for name, payload in tokenised.items():
            inference_output = NumpyCodec.encode(name=name, payload=payload)
            # Transformer's TF GPT2 model expects `INT32` inputs by default, so
            # let's enforce them
            inference_output.datatype = "INT32"
            outputs.append(inference_output)

        return InferenceResponse(
            model_name=self.name, model_version=self.version, outputs=outputs
        )
```
Note that the pre-processing logic is implemented in the `predict()` method.
At the moment, the MLServer framework doesn't expose the concepts of pre- and post-processing.
However, it's possible to implement this as a _"pseudo-model"_, relying on the service orchestrator of Seldon Core, which will be responsible for chaining the output of our tokeniser to the next model.
### Requirements and default model settings
Besides writing the logic of our custom runtime, we will also need to provide the extra requirements that will be used by our environment.
This can be done through a plain `requirements.txt` file.
Alternatively, for a finer control, it'd also be possible to leverage [Conda's environment files](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#create-env-file-manually) to specify our environment.
```
# %load tokeniser/requirements.txt
mlserver==1.0.0
transformers==4.12.3
```
On top of this, we will also add a `model-settings.json` file with the default settings for our model.
MLServer uses these files to provide extra configuration (e.g. number of parallel workers, adaptive batching configuration, etc.) for each model.
In our case, we will use this file to tell MLServer that it should always use our custom runtime by default and name our models as `tokeniser` (unless another name is specified).
```
# %load tokeniser/model-settings.json
{
"name": "tokeniser",
"implementation": "runtime.Tokeniser"
}
```
### Testing our tokeniser
> **NOTE**: To test our custom runtime locally, we will need to install the same set of dependencies that will be bundled and deployed remotely.
To achieve this, we can re-use the environment that was described in the previous section:
```bash
pip install -r ./tokeniser/requirements.txt
```
Since we're leveraging MLServer to write our custom pre-processor, it should be **easy to test it locally**.
For this, we will start MLServer using the [`mlserver start` subcommand](https://mlserver.readthedocs.io/en/latest/reference/cli.html#mlserver-start).
Note that this command has to be carried out on a separate terminal:
```bash
mlserver start ./tokeniser
```
We can then send a test request using `curl` as follows:
```
%%bash
curl localhost:8080/v2/models/tokeniser/infer \
-H 'Content-Type: application/json' \
-d '{"inputs": [{"name": "sentences", "datatype": "BYTES", "shape": [1, 11], "data": "hello world"}]}' \
| python -m json.tool
```
As we can see above, the input `hello world` gets tokenised into `[31373, 995]`, thus confirming that our custom runtime is working as expected locally.
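For reference, the same request can also be sent from Python; this is a convenience sketch added here, and it assumes the `requests` package is installed and that MLServer is still listening on `localhost:8080`.
```
import requests

# Same V2 inference payload as the curl call above
payload = {
    "inputs": [
        {"name": "sentences", "datatype": "BYTES", "shape": [1, 11], "data": ["hello world"]}
    ]
}

response = requests.post("http://localhost:8080/v2/models/tokeniser/infer", json=payload)
print(response.json())
```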
### Building the image
Once we have our custom code tested and ready, we should be able to build our custom image by using the [`mlserver build` subcommand](https://mlserver.readthedocs.io/en/latest/reference/cli.html#mlserver-build).
This image will be created under the `gpt2-tokeniser:0.1.0` tag.
```
%%bash
mlserver build ./tokeniser --tag gpt2-tokeniser:0.1.0
```
## Deploying our inference graph
Now that we have our custom tokeniser built and ready, we are able to deploy it alongside our GPT-2 model.
This can be achieved through a `SeldonDeployment` manifest which **links both models**.
That is, our tokeniser, plus the actual GPT-2 model.
As outlined above, this manifest will re-use the image and resources built in the [GPT-2 Model example](https://docs.seldon.io/projects/seldon-core/en/latest/examples/triton_gpt2_example.html), which is accessible from GCS.
> **NOTE:** This manifest expects that the `gpt2-tokeniser:0.1.0` image built in the previous section **is accessible** from within the cluster where Seldon Core has been installed. If you are [using `kind`](https://docs.seldon.io/projects/seldon-core/en/latest/install/kind.html), you should be able to load the image into your local cluster with the following command:
```bash
kind load docker-image gpt2-tokeniser:0.1.0
```
```
# %load seldondeployment.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: gpt2
spec:
  protocol: kfserving
  predictors:
    - name: default
      graph:
        name: tokeniser
        children:
          - name: gpt2
            implementation: TRITON_SERVER
            modelUri: gs://seldon-models/triton/onnx_gpt2
      componentSpecs:
        - spec:
            containers:
              - name: tokeniser
                image: gpt2-tokeniser:0.1.0
                env:
                  # Use always a writable HuggingFace cache location regardless
                  # of the user
                  - name: TRANSFORMERS_CACHE
                    value: /opt/mlserver/.cache
```
The final step will be to apply this manifest into the cluster, where Seldon Core is running.
For example, to deploy the manifest into the `models` namespace, we could run the following command:
```
!kubectl create namespace models --dry-run=client -o yaml | kubectl apply -f -
!kubectl apply -f seldondeployment.yaml -n models
```
### Testing our deployed inference graph
Finally, we can test that our deployed inference graph is working as expected by sending a request.
If we assume that our cluster can be reached at `localhost:8003`, we can send a request using `cURL` as follows:
```
%%bash
curl localhost:8003/seldon/models/gpt2/v2/models/infer \
-H 'Content-Type: application/json' \
-d '{"inputs": [{"name": "sentences", "datatype": "BYTES", "shape": [1, 11], "data": ["hello world"]}]}' \
| python -m json.tool
```
As we can see above, our plain-text request is now going successfully through the `tokeniser`, acting as a pre-processor, whose output then gets routed to the actual GPT-2 model.
```
from azureml.core import Workspace, Experiment
#ws = Workspace.get(name="udacity-project")
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="udacity-project-experiment")
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
run = exp.start_logging()
from azureml.core.compute import ComputeTarget, AmlCompute
# TODO: Create compute cluster
# Use vm_size = "Standard_D2_V2" in your provisioning configuration.
# max_nodes should be no greater than 4.
cluster_name = "cpu-cluster"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target')
except:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)  ## creating the cluster
compute_target.wait_for_completion(show_output=True)
from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import uniform, choice
import os
import shutil
# Specify parameter sampler
ps = RandomParameterSampling({
"--C": choice(0.01, 0.1, 0.5, 1.0, 1.5, 2.0, 3.0),
"--max_iter": choice(50, 100, 150, 200)
}
)
# Specify a Policy
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
script_path = "./training"
script_name = 'train.py'
if "training" not in os.listdir():
os.mkdir(script_path)
shutil.copy(script_name, script_path)
# Create a SKLearn estimator for use with train.py
est = SKLearn(source_directory=script_path, entry_script=script_name, compute_target=compute_target)
# Create a HyperDriveConfig using the estimator, hyperparameter sampler, and policy.
hyperdrive_config = HyperDriveConfig(estimator=est,
hyperparameter_sampling=ps,
policy=policy,
primary_metric_name="Accuracy",
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=24,
max_concurrent_runs=4
)
# Submit your hyperdrive run to the experiment and show run details with the widget.
hyperdrive_run = exp.submit(config=hyperdrive_config, show_output=True)
hyperdrive_run
RunDetails(hyperdrive_run).show()
hyperdrive_run.wait_for_completion(show_output=True)
import joblib
# Get your best run and save the model from that run.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
best_run_details = best_run.get_details()
print("Best Run ID: {0}".format(best_run.id))
print("Accuracy: {0:.3f}".format(best_run_metrics["Accuracy"]))
print("Parameters: {0}".format(best_run_details["runDefinition"]["arguments"]))
os.makedirs("./outputs", exist_ok=True)  # make sure the local outputs folder exists
joblib.dump(value=best_run.id, filename="./outputs/best_model.joblib")  # note: this serialises the best run's id
best_run.upload_file("outputs/best_model.joblib", "outputs/best_model.joblib")
best_run.register_model("best_model", model_path="outputs/best_model.joblib")
from azureml.data.dataset_factory import TabularDatasetFactory
# Create TabularDataset using TabularDatasetFactory
# Data is available at:
# "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
ds_url = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
ds = TabularDatasetFactory.from_delimited_files(path=ds_url)
df = ds.to_pandas_dataframe()
df.head(10)
!ls
import pandas as pd
def clean_data(data):
    # Dict for cleaning data
    months = {"jan":1, "feb":2, "mar":3, "apr":4, "may":5, "jun":6, "jul":7, "aug":8, "sep":9, "oct":10, "nov":11, "dec":12}
    weekdays = {"mon":1, "tue":2, "wed":3, "thu":4, "fri":5, "sat":6, "sun":7}

    # Clean and one hot encode data
    x_df = data.to_pandas_dataframe().dropna()
    jobs = pd.get_dummies(x_df.job, prefix="job")
    x_df.drop("job", inplace=True, axis=1)
    x_df = x_df.join(jobs)
    x_df["marital"] = x_df.marital.apply(lambda s: 1 if s == "married" else 0)
    x_df["default"] = x_df.default.apply(lambda s: 1 if s == "yes" else 0)
    x_df["housing"] = x_df.housing.apply(lambda s: 1 if s == "yes" else 0)
    x_df["loan"] = x_df.loan.apply(lambda s: 1 if s == "yes" else 0)
    contact = pd.get_dummies(x_df.contact, prefix="contact")
    x_df.drop("contact", inplace=True, axis=1)
    x_df = x_df.join(contact)
    education = pd.get_dummies(x_df.education, prefix="education")
    x_df.drop("education", inplace=True, axis=1)
    x_df = x_df.join(education)
    x_df["month"] = x_df.month.map(months)
    x_df["day_of_week"] = x_df.day_of_week.map(weekdays)
    x_df["poutcome"] = x_df.poutcome.apply(lambda s: 1 if s == "success" else 0)
    y_df = x_df.pop("y").apply(lambda s: 1 if s == "yes" else 0)
    return x_df, y_df
# Use the clean_data function to clean your data.
x, y = clean_data(ds)
import pandas as pd
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=66)
df_train = pd.concat([X_train, y_train], axis=1).reset_index(drop=True)
df_train.to_csv("bank_train_data.csv", index=False)
df_train.head()
if not os.path.isdir("dataset"):
os.mkdir("./dataset")
shutil.copy("bank_train_data.csv", "./dataset")
data_store = ws.get_default_datastore()
data_store.upload(src_dir="./dataset", target_path="bankmarketing", overwrite=True, show_progress=True)
# Uploading the training data as a tabular dataset
ds_train = TabularDatasetFactory.from_delimited_files(path=data_store.path("bankmarketing/bank_train_data.csv"))
from azureml.train.automl import AutoMLConfig
# Set parameters for AutoMLConfig
# NOTE: DO NOT CHANGE THE experiment_timeout_minutes PARAMETER OR YOUR INSTANCE WILL TIME OUT.
# If you wish to run the experiment longer, you will need to run this notebook in your own
# Azure tenant, which will incur personal costs.
automl_config = AutoMLConfig(
experiment_timeout_minutes=30,
task="classification",
primary_metric="accuracy",
compute_target=compute_target,
training_data=ds_train,
label_column_name="y",
n_cross_validations=5)
# Submit your automl run
exp_2 = Experiment(ws, "automl-experiment")
run_2 = exp_2.submit(automl_config, show_output=False)
exp_2
run_2
from azureml.widgets import RunDetails
RunDetails(run_2).show()
run_2.wait_for_completion(show_output=True)
# Retrieve and save your best automl model.
best_run, model = run_2.get_output()
joblib.dump(model, "outputs/best_automl_model.joblib")
best_run.upload_file("outputs/best_automl_model.joblib", "outputs/best_automl_model.joblib")
best_run.register_model("auto-ml-model" , model_path = "outputs/best_automl_model.joblib")
print(model)
print(model.steps)
```
---
# Part 2: Deep Learning
To begin, let's recall the basic concepts of machine learning.
What is modelling? It is when we try to specify a dependency (a function) between the features ($X$) and the target variable ($y$). To do that, we rely on assumptions about what the ideal solution should look like. For example, we can say "$y = f_k(x) = kx + \epsilon$ for some $k$", i.e. we assume that $x$ and $y$ are related linearly, but we do not know the coefficient of proportionality, so it becomes a **parameter** of the model. We then introduce a **loss function** (for example, $l(y') = (y'-y)^2$) and pick the model parameters that minimise its expected value.
How do we pick these parameters? In simple cases, such as linear regression, they can be found analytically: take the derivative, set it to zero, and solve the resulting system of equations. But sometimes the functions are much more complicated. How do we optimise them then?
## Welcome to the world of differentiable functions
The author suggests the following "hierarchy of nice functions", ranked by how easy it is to find their minimum:
* Analytically solvable: the global minimum can be expressed by a simple formula. Example: linear regression.
* Convex: a solution is guaranteed, it is unique, and it can be found quickly by various methods, in particular gradient descent (although usually even faster methods exist). Example: logistic regression.
* Differentiable: gradient descent can be applied, and with some luck it converges to the global minimum rather than a local one. **<-- YOU ARE HERE**
* Discrete: things are grim here, but at least we can evaluate the function quickly.
* Uncomputable: sometimes we need to assess something that mathematics cannot formalise at all, such as translation quality or user behaviour. Uncomputable objectives are, in particular, the domain of Reinforcement Learning.
In this course you will work with different ways of defining models built only from transformations that are differentiable with respect to their parameters, which makes it possible to search (sometimes successfully, sometimes not) for a set of parameters at which the expected loss is minimal.
## Gradient descent
The **gradient** is a vector (a collection of numbers) whose components are the values of the partial derivatives with respect to each argument (with the others held fixed).
**OK, why do we need it?** Suppose we have some function we want to minimise, and we assume it looks roughly like a smooth bowl. Then we can try the following: start from some point and take many very small steps in the direction in which the function decreases the most, until we arrive at a local minimum.
* What does "in the direction of the greatest decrease" mean? It means "against the gradient".
* What is a "small step"? It is $-\lambda \cdot (f'_1, f'_2, \ldots, f'_n)$. Usually $\lambda$ is something like $10^{-3}$. This parameter is called the learning rate.
* What does "until we arrive at a local minimum" mean? It means "until the gradient is zero". In practice we will check whether its norm (i.e. the length of the vector) is still larger than some very small $\epsilon$.

If the learning rate is small enough, we are guaranteed to reach at least a local minimum. This method is called **gradient descent**, and it is used very often to optimise functions whose gradient can be computed quickly everywhere. Science does not yet know how to reliably find the global minimum of an arbitrary function (and probably never will).
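To make the procedure concrete, here is a minimal sketch (added for illustration, not part of the original notebook) of gradient descent on the one-dimensional function $f(x) = (x - 3)^2$, whose derivative we can write down by hand:
```
# minimal gradient descent sketch for f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3)
lr = 0.1      # learning rate (the lambda above)
eps = 1e-6    # stop once the gradient norm is this small
x = 0.0       # starting point

for step in range(10_000):
    grad = 2 * (x - 3)
    if abs(grad) < eps:   # "the gradient is (almost) zero"
        break
    x -= lr * grad        # a small step against the gradient

print(step, x)            # x ends up very close to the minimum at 3
```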
### Stochastic gradient descent
Gradient descent may need many iterations to converge. Moreover, a single iteration can take a long time, if only because we have to scan the entire dataset every time. So at each step of gradient descent we will use not the exact gradient but an estimate of it: take a few dozen examples (such a set is called a batch), compute the gradients on them, and average. We get a noisy but acceptable estimate of the gradient. This kind of gradient descent is called stochastic (SGD, stochastic gradient descent).
Why not take just one example? In theory we could. But in practice it is not worth making the batch size too small, because of parallelism: the devices on which these gradients are computed spend less time per example when the data are processed in groups rather than one by one (enable the GPU in Google Colab and run the cells below).
[It can be shown](https://openreview.net/pdf?id=B1Yy1BxCZ) that to compensate for a smaller batch size you should reduce the learning rate by the same factor. In other words, noisier gradient estimates can be compensated for by smaller, more careful steps.
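To see how good the "noisy but acceptable estimate" actually is, here is a small NumPy sketch (added for illustration) comparing the gradient of the mean squared error computed on the full dataset with the gradient averaged over a single random batch:
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
w_true = np.array([1., -2., 0.5, 3., 0.])
y = X @ w_true + 0.1 * rng.normal(size=10_000)

w = np.zeros(5)  # current parameters

def grad_mse(X_part, y_part, w):
    # gradient of the mean squared error with respect to w
    return 2 * X_part.T @ (X_part @ w - y_part) / len(y_part)

full_grad = grad_mse(X, y, w)
batch_idx = rng.choice(len(y), size=64, replace=False)   # one batch of 64 examples
batch_grad = grad_mse(X[batch_idx], y[batch_idx], w)

# relative error of the batch estimate: noticeably non-zero, but small
print(np.linalg.norm(full_grad - batch_grad) / np.linalg.norm(full_grad))
```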
<img width='500px' src='https://sqream.com/wp-content/uploads/2017/03/cpu_vs_gpu-11.png'>
If you are working locally, go to https://pytorch.org/get-started/locally/ and install PyTorch.
```
import torch
import numpy
A = numpy.random.randn(1000, 5000)
B = numpy.random.randn(5000, 2000)
%time C = numpy.matmul(A, B)
A = torch.randn(1000, 5000)
B = torch.randn(5000, 2000)
%time C = torch.matmul(A, B)
# if you opened this notebook in Google Colab, enable the GPU
# (top left: Runtime -> Change runtime type... -> GPU)
A = torch.randn(1000, 5000).cuda()
B = torch.randn(5000, 2000).cuda()
%time C = torch.matmul(A, B)
```
### Heuristics
In complex models, such as neural networks, the surfaces of the functions being optimised usually look rather scary:
<img width='250px' src='https://ml4a.github.io/images/figures/non_convex_function.png'>
There are really two kinds of "bad" points during optimisation: those where the gradient is zero and those where it is infinite.
**Momentum.** What do we do if we end up at a point where there is almost no gradient? Instead of stepping in the direction of the gradient at the current point, we step in the direction of the *exponentially averaged* gradient over all previous iterations (gradients from the most recent iterations get larger weights). For this we introduce a special hyperparameter $0 < \gamma < 1$, and next to each parameter we store a running average of its gradients, updated by the following formula:
$$ \hat{g}_i = \hat{g}_{i-1} \cdot \gamma + g_i $$
<img width='250px' src='https://upload.wikimedia.org/wikipedia/commons/thumb/1/1e/Saddle_point.svg/300px-Saddle_point.svg.png'>
**RMSProp.** What if we are standing on a "cliff"? We maintain, in the same way, running averages of the *squared* gradients, and when updating the parameters we normalise the gradient by dividing it by the square root of this estimate. The optimiser thus adapts to "turbulent" regions, shrinking the step size there and preventing the parameters from being thrown far away by the cliffs.
<img width='250px' src='https://3.bp.blogspot.com/-fJQ8OM1dHl4/WV363VZZVqI/AAAAAAAAFSk/0e0EuS3WZ9gv5jW93cuF-XjU2FAN42VMQCLcBGAs/s1600/gradient_clipping.png'>
The algorithm that combines these two heuristics is called **Adam**. It is one of the most widely used optimisers in deep learning.
You can read more about gradient descent heuristics here: http://ruder.io/optimizing-gradient-descent/
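Both heuristics fit in a few lines of code. The sketch below is an added illustration in the spirit of Adam (the constants are common defaults rather than anything prescribed by the text): it keeps a running average of the gradients (momentum) and of their squares (RMSProp) and combines the two in the update.
```
import numpy as np

def adam_like_step(w, g, state, lr=1e-3, gamma=0.9, beta=0.999, eps=1e-8):
    """One parameter update combining momentum and RMSProp-style normalisation."""
    state["m"] = gamma * state["m"] + (1 - gamma) * g       # averaged gradient
    state["v"] = beta * state["v"] + (1 - beta) * g ** 2    # averaged squared gradient
    return w - lr * state["m"] / (np.sqrt(state["v"]) + eps), state

w = np.array([5.0])                       # start far from the minimum of (w - 3)^2
state = {"m": np.zeros_like(w), "v": np.zeros_like(w)}
for _ in range(5000):
    g = 2 * (w - 3)                       # gradient of (w - 3)^2
    w, state = adam_like_step(w, g, state)
print(w)                                  # ends up close to the minimum at 3
```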
----
## Practical part: frameworks
To optimise the loss function with respect to the parameters by gradient descent, we first need to be able to compute that gradient at all. As you will see in lesson 3, doing this by hand is painful. That is what frameworks are for: they compute the derivatives for us. Besides their main job (efficient automatic differentiation and optimisers), they also provide useful abstractions for machine learning.
There are many frameworks, and more keep appearing. We will use **PyTorch**. It will strongly remind you of numpy: essentially it can do all the same things, except it can also compute gradients with respect to parameters.
PyTorch can be used as a replacement for numpy:
```
x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])
z = x + y
print(z)
# when creating variables we can set the requires_grad flag
x = torch.tensor([1., 2., 3], requires_grad=True)
# with this flag we can do the same operations as before
y = torch.tensor([4., 5., 6], requires_grad=True)
z = torch.dot(x, y)
print(z)
```
...but now z knows something about itself:
```
print(z.grad_fn)
```
`z` is a scalar, so we can differentiate the whole graph with respect to it:
```
z.backward()
```
Now every variable with requires_grad=True that was used in any way to obtain z has its gradient stored next to it.
```
print(x.grad)
print(y.grad)
```
Later we will use these gradients to move the parameters in the right direction during gradient descent.
## MNIST
That was all rather abstract. Let's look at a more concrete example.
The MNIST dataset contains 70,000 black-and-white images of digits from 0 to 9, each 28 by 28 pixels. The task is to predict, from an image, the most likely digit it shows.
<img width='400px' src='https://camo.githubusercontent.com/24545a9ca1aa3b5d1036bd3deaed3ed7ec6cfdc4/68747470733a2f2f692e696d6775722e636f6d2f4954726d3978342e706e67'>
A **neural network** is simply some sequence of differentiable operations applied to the input data. These bulk operations on vectors are usually called **layers**. The simplest example is a matrix multiplication followed by the `softmax` operation:
$$ \sigma(x)_k = \frac{e^{x_k}}{\sum_i e^{x_i}} $$
It returns a probability distribution: it is easy to check that every element is non-negative and that all the $\sigma_i$ sum to one. If anyone remembers, we have just described logistic regression, which in some sense is itself a very simple neural network.
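A quick numerical sanity check of the formula (a small added illustration):
```
import torch

x = torch.tensor([1.0, 2.0, 3.0])
probs = torch.softmax(x, dim=0)   # e^{x_k} / sum_i e^{x_i}
print(probs, probs.sum())         # non-negative values that sum to 1
```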
Let's train some network that takes vectors of size $784 = 28^2$ and returns a probability distribution. **IMPORTANT**: for computational reasons we will almost always process data in batches, so the shapes of the inputs and of the intermediate data will always be of the form (batch_size x dim).
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
%matplotlib inline
def get_loader(train, batch_size):
    '''Downloads MNIST and stores it somewhere nearby.'''
    # a Dataset in PyTorch is an object that wraps the raw data and applies some preprocessing to it
    dataset = datasets.MNIST('mnist', train=train, download=True,
                             transform=transforms.ToTensor())
    # a DataLoader turns the dataset into a generator that yields the data grouped into batches
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    return loader
train = get_loader(True, 64)
val = get_loader(False, 64)
```
For the first lesson you do not need to know in detail how Datasets and DataLoaders work, but in the future it will be useful to read the tutorial at pytorch.org: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
As the loss function we choose cross-entropy, just as in logistic regression. PyTorch has a function that takes the logarithms of the probabilities together with the correct answers and returns the cross-entropy: `nn.NLLLoss` (negative log likelihood loss). For computational reasons (mainly precision issues) we will almost always work with log-probabilities rather than with the probabilities themselves. To make the network output them, add an `nn.LogSoftmax` layer as the last layer.
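The pair `nn.LogSoftmax` + `nn.NLLLoss` is exactly the cross-entropy loss; the short check below is an added illustration of that equivalence:
```
import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # a batch of 4 "predictions" over 10 classes
targets = torch.tensor([0, 3, 9, 1])  # the correct classes

log_probs = nn.LogSoftmax(dim=1)(logits)
loss_a = nn.NLLLoss()(log_probs, targets)
loss_b = nn.CrossEntropyLoss()(logits, targets)  # works on raw logits directly
print(loss_a, loss_b)                 # the two values coincide
```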
### How to build a simple model
The basic building block for our networks is nn.Sequential, which takes as its arguments a sequence of layers that the data will pass through one after another. There are two kinds of layers: some need to know the tensor dimensions and some do not. It is important to understand that, since the data pass through the layers sequentially, the tensor shape may change from layer to layer.
The layers you will need at first:
* nn.Linear is a layer that applies a linear transformation. In simple models this is exactly what we train: we fit its transformation coefficients. To use nn.Linear you must specify the dimension of the input tensor and the desired output dimension. For example, nn.Linear(784, 10) transforms a tensor of shape (batch_size, 784) into a tensor of shape (batch_size, 10).
* nn.ReLU is a layer that applies the ReLU function, which is non-linear. Why this is needed will become clear a bit later.
* nn.Sigmoid is a layer that applies the Sigmoid function, which has two properties: first, it is non-linear, and second, its values lie in the interval [0, 1]. If you only need non-linearity, ReLU is usually preferable (because of the vanishing gradient problem).
* nn.Softmax was described above. This layer turns its input into a probability distribution. Mostly needed for classification tasks.
### Why non-linearity is needed
Non-linear functions are also called activation functions. Layers with these functions usually have no parameters that are optimised during training; they exist so that the linear functions do not collapse into one another. After all, a composition of linear functions is itself a linear function, so for a composition of layers B and C there would exist a single layer A that is just as expressive but cheaper to train.
```
model = nn.Sequential(
# some nn.Linear layers and non-linearities
# ...
nn.LogSoftmax(dim=1)
)
```
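If you want a reference point, here is one possible way to fill in the template above (an added sketch with arbitrary layer sizes, not a prescribed solution):
```
# one possible configuration of the template above; the hidden size is arbitrary
model = nn.Sequential(
    nn.Linear(784, 256),   # (batch_size, 784) -> (batch_size, 256)
    nn.ReLU(),
    nn.Linear(256, 10),    # 10 classes
    nn.LogSoftmax(dim=1)
)
```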
Cross-entropy is not very informative: it is measured in arbitrary units rather than in anything interpretable. What we actually care about is plain classification accuracy:
```
def accuracy(model, val):
    total = 0
    correct = 0
    for X, y in val:
        X = X.view(-1, 784)
        res = model(X)
        res = res.argmax(dim=1)
        total += res.shape[0]
        correct += (res == y).sum().item()
    return correct / total
```
## Training
The following blocks of code are very important, because we will use them all the time. What happens here:
1. optimizer is the object that is responsible for gradient descent and for updating the model parameters.
2. criterion is the loss function that we minimise.
3. epoch: epochs. We want to go over the whole training dataset some number of times (for example, 10) and train on it.
4. zero_grad: we clear all the gradient information that the optimizer has stored so far.
5. output: we obtain the model's output.
6. loss: we compute the loss function.
7. backward: we compute the gradients that the optimizer will use at this step to update the model parameters (see [backpropagation](https://colab.research.google.com/drive/1U2rElWU-0QVjSy421fsTrRMPUK2p9v9F#scrollTo=JpKNvmHR1I_e&line=12&uniqifier=1)).
8. step: the optimizer updates the whole model.
```
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.NLLLoss()
# ^ try some other loss and compare, if you are not yet a believer in cross-entropy
train_losses = []
for epoch in range(10):
    for X, y in train:
        X = X.view(-1, 784)  # flatten the image into a vector
        optimizer.zero_grad()
        output = model(X)
        loss = criterion(output, y)
        loss.backward()
        train_losses.append(loss.item())
        # why do you think .item() is needed here?
        # hint: the loss keeps information about its history
        # try removing .item() and watch the memory usage
        optimizer.step()
    print(accuracy(model, train), accuracy(model, val))
plt.plot(train_losses)
plt.show()
```
### Regularisation
You may notice that at some point the loss on `val` stops decreasing (and later even starts to grow), while the loss on `train` keeps going down steadily. This is overfitting. If the network is large enough, its neurons can adapt to reduce the loss on individual examples, and this does not generalise well to data the model has not seen yet. For example, the network may learn a rule like "if this particular pixel has this particular value, then it is a six"; in the structure of the network this shows up as a very strong connection between neurons. To fight this, neural networks use regularisation methods applied to the weights or to the training process.
The most popular one at the moment is Dropout (`nn.Dropout` in PyTorch). It is a separate layer that, during training, independently zeroes out each element with probability $p$. This makes it harder for the neurons to co-adapt.
<img width='600px' src='https://cdn-images-1.medium.com/max/1200/1*iWQzxhVlvadk6VAJjsgXgg.png'>
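A usage sketch (added for illustration): dropout is inserted as an ordinary layer, and it is only active in training mode, so the model has to be switched between `train()` and `eval()`:
```
model_with_dropout = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # zeroes each activation with probability 0.5 during training
    nn.Linear(256, 10),
    nn.LogSoftmax(dim=1)
)

model_with_dropout.train()  # dropout active (training)
model_with_dropout.eval()   # dropout disabled (validation / inference)
```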
## Autoencoders
**Autoencoders** are networks that learn to reconstruct their own input. This kind of training is sometimes called self-supervised.
<img width='400px' src='https://habrastorage.org/web/cf6/228/613/cf6228613fdc4f8fb819cbd41bb677eb.png'>
It might seem that learning the function $f(x) = x$ is very easy, but autoencoders are built so that all the information passes at some point through a hidden layer of small dimension, so an autoencoder simply has no way to copy its input to its output perfectly.
The network is therefore forced to learn, in that hidden layer, a very compressed and informative representation of the data, which can then be used for all sorts of interesting things.
For example, for visualisation: we can make the hidden layer of size 2 and plot the data on a plane.
<img width='800px' src='https://i.stack.imgur.com/2gSs1.png'>
The ubiquitous PCA is in fact a special case of an autoencoder in which only linear transformations are allowed.
We can also use the hidden states for morphing, i.e. a smooth transition between objects.
<img width='250px' src='https://camo.githubusercontent.com/fa61cfca07320919eb6430a2a06f98d3e68e29c1/68747470733a2f2f692e696d6775722e636f6d2f4f72554a7339562e676966'>
Denote the encoder already trained on the data by $e$ and the decoder by $d$. Then morphing between images $A$ and $B$ can be done as follows: map the images A and B to their hidden states $a = e(A)$ and $b = e(B)$, and then generate each frame as
$$ C = d((1-t) \cdot a + t \cdot b) $$
where $t$ varies uniformly from 0 to 1. In other words, we take all the points on the segment between a and b and decode them one after another.
This is what you are going to implement.
```
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            # we want to map the image into some X-dimensional space
        )
        self.decode = nn.Sequential(
            # and now the other way around: from the X-dimensional space back to an image
            nn.Sigmoid()
            # images are tensors with values between 0 and 1,
            # so there is little point in producing anything outside that range
        )

    def forward(self, x):
        return self.decode(self.encode(x))
model = Autoencoder()
criterion = torch.nn.MSELoss()
# ^ also try other difference measures (for example, the absolute error)
optimizer = torch.optim.Adam(model.parameters())
for epoch in range(10):
    train_loss = 0
    for data, _ in train:
        # ^ we don't need the labels
        data = data.view(-1, 784)
        optimizer.zero_grad()
        reconstructed = model(data)
        loss = criterion(data, reconstructed)
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
    print('epoch %d, loss %.4f' % (epoch, train_loss / len(train)))
```
Now let's try to make a GIF like the one above.
`matplotlib` animations are a pain, so don't try too hard to understand the code below. You may have to go on a small quest and install `ffmpeg` (`apt install ffmpeg`, `pip install ffmpeg` and a restart of the notebook should be enough in most cases).
```
from matplotlib import animation
from matplotlib.animation import FuncAnimation
from IPython.display import HTML, display
def get(x):
    return train.dataset[x][0].view(1, 784)

def imshow(img):
    pic = img.numpy().astype('float')
    plt.axis('off')
    return plt.imshow(pic, cmap='Greys', animated=True)

def morph(inputs, steps, delay):
    # map all the input images into the latent space
    latent = [model.encode(get(k)).data for k in inputs]
    fig = plt.figure()
    images = []
    for a, b in zip(latent, latent[1:] + [latent[0]]):
        for t in numpy.linspace(0, 1, steps):
            # get the interpolated point...
            c = a*(1-t) + b*t
            # ...and decode it into an image
            morphed = model.decode(c).data
            morphed = morphed.view(28, 28)
            images.append([imshow(morphed)])
    ani = animation.ArtistAnimation(fig, images, interval=delay)
    display(HTML(ani.to_html5_video()))
morph(numpy.random.randint(0, len(train.dataset), 30), 20, 30)
```
# Homework
* Reach 97% accuracy on the MNIST validation set.
* Implement morphing with the autoencoder (you will get a nice GIF).
* Visualise MNIST with an autoencoder (just train an autoencoder with a latent space of dimension 2 and scatter-plot the points in different colours).
### *Convolutions
If you have time left, you can improve the results by using convolutions.
You will learn about convolutional networks in detail in the next lesson; for now you can use `nn.Conv2d`, `nn.MaxPool2d`, and `nn.ConvTranspose2d` simply as more advanced layers for the classifier and the autoencoder, without really understanding how they work inside.
The main task of "neural engineers" is to figure out what a solution to the problem would look like as a program with unknown parameters, and to pick architectures accordingly. [Experiments with dropout](https://arxiv.org/pdf/1701.05369.pdf) show that about 99% of the weights in a Linear layer can actually be thrown away. It stands to reason that an optimal architecture should not contain useless weights: extra parameters always lead to overfitting. For images the solution is to use the information about how pixels are positioned relative to one another to build a layer that looks at more relevant features. That is roughly the motivation; the details come in a week.
<img width='250px' src='https://cdn-images-1.medium.com/max/1600/0*iqNdZWyNeCr5tCkc.'>
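Below is a minimal sketch of such a convolutional classifier (the channel counts and layer sizes are arbitrary choices, not something the assignment prescribes):
```
# A hypothetical convolutional classifier for MNIST (28x28 grayscale inputs).
conv_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 16x28x28 -> 16x14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 16x14x14 -> 32x14x14
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x14x14 -> 32x7x7
    nn.Flatten(),                                 # 32*7*7 = 1568 features
    nn.Linear(32 * 7 * 7, 10),
    nn.LogSoftmax(dim=1)
)
# Note: with a convolutional model, do not flatten the input to 784;
# feed batches of shape (batch_size, 1, 28, 28) instead.
```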
# Natural Language Inference: Using Attention
:label:`sec_natural-language-inference-attention`
We introduced the natural language inference task and the SNLI dataset in :numref:`sec_natural-language-inference-and-dataset`. In contrast to many models based on complex and deep architectures, Parikh et al. proposed to address natural language inference with attention mechanisms, calling their approach a "decomposable attention model" :cite:`Parikh.Tackstrom.Das.ea.2016`.
The result is a model without recurrent or convolutional layers that achieved the best result on the SNLI dataset at the time with far fewer parameters.
In this section, we will describe and implement this attention-based method (with MLPs) for natural language inference, as depicted in :numref:`fig_nlp-map-nli-attention`.

:label:`fig_nlp-map-nli-attention`
## The Model
Rather than preserving the order of tokens in premises and hypotheses,
we can simply align tokens in one text sequence to every token in the other, and vice versa,
then compare and aggregate such information to predict the logical relationships
between premises and hypotheses.
Similar to alignment of tokens between source and target sentences in machine translation,
the alignment of tokens between premises and hypotheses
can be neatly accomplished by attention mechanisms.

:label:`fig_nli_attention`
:numref:`fig_nli_attention` depicts the natural language inference method using attention mechanisms.
At a high level, it consists of three jointly trained steps: attending, comparing, and aggregating.
We will illustrate them step by step in the following.
```
from mxnet import gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
```
### Attending
The first step is to align tokens in one text sequence to each token in the other sequence.
Suppose that the premise is "i do need sleep" and the hypothesis is "i am tired".
Due to semantic similarity,
we may wish to align "i" in the hypothesis with "i" in the premise,
and align "tired" in the hypothesis with "sleep" in the premise.
Likewise, we may wish to align "i" in the premise with "i" in the hypothesis,
and align "need" and "sleep" in the premise with "tired" in the hypothesis.
Note that such alignment is *soft* using weighted average,
where ideally large weights are associated with the tokens to be aligned.
For ease of demonstration, :numref:`fig_nli_attention` shows such alignment in a *hard* way.
Now we describe the soft alignment using attention mechanisms in more detail.
Denote by $\mathbf{A} = (\mathbf{a}_1, \ldots, \mathbf{a}_m)$
and $\mathbf{B} = (\mathbf{b}_1, \ldots, \mathbf{b}_n)$ the premise and hypothesis,
whose number of tokens are $m$ and $n$, respectively,
where $\mathbf{a}_i, \mathbf{b}_j \in \mathbb{R}^{d}$ ($i = 1, \ldots, m, j = 1, \ldots, n$) is a $d$-dimensional word vector.
For soft alignment, we compute the attention weights $e_{ij} \in \mathbb{R}$ as
$$e_{ij} = f(\mathbf{a}_i)^\top f(\mathbf{b}_j),$$
:eqlabel:`eq_nli_e`
where the function $f$ is an MLP defined in the following `mlp` function.
The output dimension of $f$ is specified by the `num_hiddens` argument of `mlp`.
```
def mlp(num_hiddens, flatten):
net = nn.Sequential()
net.add(nn.Dropout(0.2))
net.add(nn.Dense(num_hiddens, activation='relu', flatten=flatten))
net.add(nn.Dropout(0.2))
net.add(nn.Dense(num_hiddens, activation='relu', flatten=flatten))
return net
```
It should be highlighted that, in :eqref:`eq_nli_e`
$f$ takes inputs $\mathbf{a}_i$ and $\mathbf{b}_j$ separately rather than taking a pair of them together as the input.
This *decomposition* trick leads to only $m + n$ applications (linear complexity) of $f$ rather than $mn$ applications
(quadratic complexity).
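As a quick back-of-the-envelope check (the token counts below are made up purely for illustration):
```
# Hypothetical sequence lengths, just to illustrate the complexity argument.
m, n = 20, 12
print(f'decomposed: {m + n} applications of f, pairwise: {m * n}')
```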
Normalizing the attention weights in :eqref:`eq_nli_e`,
we compute the weighted average of all the token vectors in the hypothesis
to obtain the representation of the hypothesis that is softly aligned with the token indexed by $i$ in the premise:
$$
\boldsymbol{\beta}_i = \sum_{j=1}^{n}\frac{\exp(e_{ij})}{ \sum_{k=1}^{n} \exp(e_{ik})} \mathbf{b}_j.
$$
Likewise, we compute soft alignment of premise tokens for each token indexed by $j$ in the hypothesis:
$$
\boldsymbol{\alpha}_j = \sum_{i=1}^{m}\frac{\exp(e_{ij})}{ \sum_{k=1}^{m} \exp(e_{kj})} \mathbf{a}_i.
$$
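As a rough numerical illustration (the sizes below are made up and unrelated to the model), each $\boldsymbol{\beta}_i$ is simply a softmax-weighted average of the hypothesis token vectors:
```
# Toy sizes: m=2 premise tokens, n=3 hypothesis tokens, d=4 dimensions.
import numpy as onp  # aliased so it does not clash with mxnet's `np` imported above

e_toy = onp.random.randn(2, 3)                                  # scores e_ij
w = onp.exp(e_toy) / onp.exp(e_toy).sum(axis=1, keepdims=True)  # softmax over j
B_toy = onp.random.randn(3, 4)                                  # hypothesis vectors b_j
beta_toy = w @ B_toy                                            # each row is a weighted average of the b_j
print(beta_toy.shape)                                           # (2, 4)
```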
Below we define the `Attend` class to compute the soft alignment of hypotheses (`beta`) with input premises `A` and soft alignment of premises (`alpha`) with input hypotheses `B`.
```
class Attend(nn.Block):
def __init__(self, num_hiddens, **kwargs):
super(Attend, self).__init__(**kwargs)
self.f = mlp(num_hiddens=num_hiddens, flatten=False)
def forward(self, A, B):
        # Shape of `A`/`B`: (`batch_size`, no. of tokens in sequence A/B,
# `embed_size`)
# Shape of `f_A`/`f_B`: (`batch_size`, no. of tokens in sequence A/B,
# `num_hiddens`)
f_A = self.f(A)
f_B = self.f(B)
# Shape of `e`: (`batch_size`, no. of tokens in sequence A,
# no. of tokens in sequence B)
e = npx.batch_dot(f_A, f_B, transpose_b=True)
# Shape of `beta`: (`batch_size`, no. of tokens in sequence A,
# `embed_size`), where sequence B is softly aligned with each token
# (axis 1 of `beta`) in sequence A
beta = npx.batch_dot(npx.softmax(e), B)
# Shape of `alpha`: (`batch_size`, no. of tokens in sequence B,
# `embed_size`), where sequence A is softly aligned with each token
# (axis 1 of `alpha`) in sequence B
alpha = npx.batch_dot(npx.softmax(e.transpose(0, 2, 1)), A)
return beta, alpha
```
### Comparing
In the next step, we compare a token in one sequence with the other sequence that is softly aligned with that token.
Note that in soft alignment, all the tokens from one sequence, though with probably different attention weights, will be compared with a token in the other sequence.
For ease of demonstration, :numref:`fig_nli_attention` pairs tokens with aligned tokens in a *hard* way.
For example, suppose that the attending step determines that "need" and "sleep" in the premise are both aligned with "tired" in the hypothesis; then the pair "tired--need sleep" will be compared.
In the comparing step, we feed the concatenation (operator $[\cdot, \cdot]$) of tokens from one sequence and aligned tokens from the other sequence into a function $g$ (an MLP):
$$\mathbf{v}_{A,i} = g([\mathbf{a}_i, \boldsymbol{\beta}_i]), i = 1, \ldots, m\\ \mathbf{v}_{B,j} = g([\mathbf{b}_j, \boldsymbol{\alpha}_j]), j = 1, \ldots, n.$$
:eqlabel:`eq_nli_v_ab`
In :eqref:`eq_nli_v_ab`, $\mathbf{v}_{A,i}$ is the comparison between token $i$ in the premise and all the hypothesis tokens that are softly aligned with token $i$;
while $\mathbf{v}_{B,j}$ is the comparison between token $j$ in the hypothesis and all the premise tokens that are softly aligned with token $j$.
The following `Compare` class defines such a comparing step.
```
class Compare(nn.Block):
def __init__(self, num_hiddens, **kwargs):
super(Compare, self).__init__(**kwargs)
self.g = mlp(num_hiddens=num_hiddens, flatten=False)
def forward(self, A, B, beta, alpha):
V_A = self.g(np.concatenate([A, beta], axis=2))
V_B = self.g(np.concatenate([B, alpha], axis=2))
return V_A, V_B
```
### Aggregating
With two sets of comparison vectors $\mathbf{v}_{A,i}$ ($i = 1, \ldots, m$) and $\mathbf{v}_{B,j}$ ($j = 1, \ldots, n$) on hand,
in the last step we will aggregate such information to infer the logical relationship.
We begin by summing up both sets:
$$
\mathbf{v}_A = \sum_{i=1}^{m} \mathbf{v}_{A,i}, \quad \mathbf{v}_B = \sum_{j=1}^{n}\mathbf{v}_{B,j}.
$$
Next we feed the concatenation of both summarization results into function $h$ (an MLP) to obtain the classification result of the logical relationship:
$$
\hat{\mathbf{y}} = h([\mathbf{v}_A, \mathbf{v}_B]).
$$
The aggregation step is defined in the following `Aggregate` class.
```
class Aggregate(nn.Block):
def __init__(self, num_hiddens, num_outputs, **kwargs):
super(Aggregate, self).__init__(**kwargs)
self.h = mlp(num_hiddens=num_hiddens, flatten=True)
self.h.add(nn.Dense(num_outputs))
def forward(self, V_A, V_B):
# Sum up both sets of comparison vectors
V_A = V_A.sum(axis=1)
V_B = V_B.sum(axis=1)
# Feed the concatenation of both summarization results into an MLP
Y_hat = self.h(np.concatenate([V_A, V_B], axis=1))
return Y_hat
```
### Putting All Things Together
By putting the attending, comparing, and aggregating steps together,
we define the decomposable attention model to jointly train these three steps.
```
class DecomposableAttention(nn.Block):
def __init__(self, vocab, embed_size, num_hiddens, **kwargs):
super(DecomposableAttention, self).__init__(**kwargs)
self.embedding = nn.Embedding(len(vocab), embed_size)
self.attend = Attend(num_hiddens)
self.compare = Compare(num_hiddens)
# There are 3 possible outputs: entailment, contradiction, and neutral
self.aggregate = Aggregate(num_hiddens, 3)
def forward(self, X):
premises, hypotheses = X
A = self.embedding(premises)
B = self.embedding(hypotheses)
beta, alpha = self.attend(A, B)
V_A, V_B = self.compare(A, B, beta, alpha)
Y_hat = self.aggregate(V_A, V_B)
return Y_hat
```
## Training and Evaluating the Model
Now we will train and evaluate the defined decomposable attention model on the SNLI dataset.
We begin by reading the dataset.
### Reading the Dataset
We download and read the SNLI dataset using the function defined in :numref:`sec_natural-language-inference-and-dataset`. The batch size and sequence length are set to $256$ and $50$, respectively.
```
batch_size, num_steps = 256, 50
train_iter, test_iter, vocab = d2l.load_data_snli(batch_size, num_steps)
```
### Creating the Model
We use the pretrained 100-dimensional GloVe embedding to represent the input tokens.
Thus, we predefine the dimension of vectors $\mathbf{a}_i$ and $\mathbf{b}_j$ in :eqref:`eq_nli_e` as 100.
The output dimension of functions $f$ in :eqref:`eq_nli_e` and $g$ in :eqref:`eq_nli_v_ab` is set to 200.
Then we create a model instance, initialize its parameters,
and load the GloVe embedding to initialize vectors of input tokens.
```
embed_size, num_hiddens, devices = 100, 200, d2l.try_all_gpus()
net = DecomposableAttention(vocab, embed_size, num_hiddens)
net.initialize(init.Xavier(), ctx=devices)
glove_embedding = d2l.TokenEmbedding('glove.6b.100d')
embeds = glove_embedding[vocab.idx_to_token]
net.embedding.weight.set_data(embeds)
```
### Training and Evaluating the Model
In contrast to the `split_batch` function in :numref:`sec_multi_gpu` that takes single inputs such as text sequences (or images),
we define a `split_batch_multi_inputs` function to take multiple inputs such as premises and hypotheses in minibatches.
```
#@save
def split_batch_multi_inputs(X, y, devices):
"""Split multi-input `X` and `y` into multiple devices."""
X = list(zip(*[gluon.utils.split_and_load(
feature, devices, even_split=False) for feature in X]))
return (X, gluon.utils.split_and_load(y, devices, even_split=False))
```
Now we can train and evaluate the model on the SNLI dataset.
```
lr, num_epochs = 0.001, 4
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': lr})
loss = gluon.loss.SoftmaxCrossEntropyLoss()
d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs, devices,
split_batch_multi_inputs)
```
### Using the Model
Finally, define the prediction function to output the logical relationship between a pair of premise and hypothesis.
```
#@save
def predict_snli(net, vocab, premise, hypothesis):
"""Predict the logical relationship between the premise and hypothesis."""
premise = np.array(vocab[premise], ctx=d2l.try_gpu())
hypothesis = np.array(vocab[hypothesis], ctx=d2l.try_gpu())
label = np.argmax(net([premise.reshape((1, -1)),
hypothesis.reshape((1, -1))]), axis=1)
return 'entailment' if label == 0 else 'contradiction' if label == 1 \
else 'neutral'
```
We can use the trained model to obtain the natural language inference result for a sample pair of sentences.
```
predict_snli(net, vocab, ['he', 'is', 'good', '.'], ['he', 'is', 'bad', '.'])
```
## Summary
* The decomposable attention model consists of three steps for predicting the logical relationships between premises and hypotheses: attending, comparing, and aggregating.
* With attention mechanisms, we can align tokens in one text sequence to every token in the other, and vice versa. Such alignment is soft using weighted average, where ideally large weights are associated with the tokens to be aligned.
* The decomposition trick leads to a more desirable linear complexity than quadratic complexity when computing attention weights.
* We can use pretrained word vectors as the input representation for downstream natural language processing tasks such as natural language inference.
## Exercises
1. Train the model with other combinations of hyperparameters. Can you get better accuracy on the test set?
1. What are major drawbacks of the decomposable attention model for natural language inference?
1. Suppose that we want to get the level of semantic similarity (e.g., a continuous value between 0 and 1) for any pair of sentences. How shall we collect and label the dataset? Can you design a model with attention mechanisms?
[Discussions](https://discuss.d2l.ai/t/395)
<h1><font size=12>
Weather Derivatives </h1>
<h1> Rainfall Simulator <br></h1>
Developed by [Jesus Solano](mailto:[email protected]) <br>
16 September 2018
```
# Import needed libraries.
import numpy as np
import pandas as pd
import random as rand
import matplotlib.pyplot as plt
from scipy.stats import bernoulli
from scipy.stats import gamma
import time
```
## Simulation Function Core
```
### Build the simulation core.
# Updates the state of the day based on yesterday state.
def updateState(yesterdayDate, yesterdayState, monthTransitions):
yesterdayMonth = yesterdayDate.month
successProbability = monthTransitions['p'+str(yesterdayState)+'1'][yesterdayMonth]
todayState = bernoulli.rvs(successProbability)
return todayState
# Simulates one run of simulation.
def oneRun(daysNumber, startDate, initialState, monthTransitions,fittedGamma):
# Create a variable to store the last day state.
yesterdayState = initialState
# Generate a timestamp with all days in simulation.
dates = pd.date_range(startDate, periods=daysNumber, freq='D')
# Define the total rainfall amount over the simulation.
rainfall = 0
    # Loop over the days in the simulation to compute the rainfall amount.
for day in dates:
# Update today state based on the yesterday state.
todayState = updateState(day-1, yesterdayState, monthTransitions)
# Computes total accumulated rainfall.
if todayState == 1:
todayRainfall = gamma.rvs(fittedGamma['Shape'][0],fittedGamma['Loc'][0],fittedGamma['Scale'][0])
# Updates rainfall amount.
rainfall += todayRainfall
yesterdayState = todayState
return rainfall
# Run only one iteration(Print structure of results)
# Simulations iterations.
iterations = 10000
# Transition probabilities.
monthTransitionsProb = pd.read_csv('../results/visibleMarkov/monthTransitions.csv', index_col=0)
# Rainfall amount parameters (Gamma distribution parameters).
fittedGamma = pd.read_csv('../results/visibleMarkov/fittedGamma.csv', index_col=0)
# Number of simulation days (e.g. 30, 60).
daysNumber = 30
# Simulation start date ('1995-04-22')
startDate = '2018-08-18'
# State of rainfall last day before start date --> Remember 0 means dry and 1 means wet.
initialState = 0
oneRun(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma)
```
## Complete Simulation
```
# Run total iterations.
def totalRun(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma,iterations):
# Initialize time
startTime = time.time()
    # Array to store the total rainfall of each iteration.
rainfallPerIteration = [None]*iterations
    # Loop over each iteration (simulation).
for i in range(iterations):
iterationRainfall = oneRun(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma)
rainfallPerIteration[i] = iterationRainfall
# Calculate time
currentTime = time.time() - startTime
# Logging time.
print('The elapsed time over simulation is: ', currentTime, ' seconds.')
return rainfallPerIteration
#### Define parameters simulation.
# Simulations iterations.
iterations = 1000
# Transition probabilities.
monthTransitionsProb = pd.read_csv('../results/visibleMarkov/monthTransitions.csv', index_col=0)
# Rainfall amount parameters (Gamma distribution parameters).
fittedGamma = pd.read_csv('../results/visibleMarkov/fittedGamma.csv', index_col=0)
# Number of simulation days (e.g. 30, 60).
daysNumber = 30
# Simulation start date ('1995-04-22')
startDate = '2018-08-18'
# State of rainfall last day before start date --> Remember 0 means dry and 1 means wet.
initialState = 1
```
## Final Results
```
# Final Analysis.
finalSimulation = totalRun(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma,iterations)
fig = plt.figure(figsize=(20, 10))
plt.hist(finalSimulation,facecolor='steelblue',bins=100, density=True,
histtype='stepfilled', edgecolor = 'black' , hatch = '+')
plt.title('Rainfall Simulation')
plt.xlabel('Rainfall Amount [mm]')
plt.ylabel('Probability ')
plt.grid()
plt.show()
```
# Financial Analysis
<a href="https://colab.research.google.com/github/djw8605/Pearc19-StashCache-Tools/blob/master/StashCache_Percent_Difference.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import io
bell_data = b'''\
File,HTTP,HTTP Cached,stashcp,stashcp Cached
95perc.data,593548,330709,188170,104127
25perc.data,40368,63929,12602,10141
1perc.data,811,82,3172,1653
big_blast.data,1373221,667589,569703,600365
50perc.data,209167,29096,36023,28875
75perc.data,115250,37430,25758,24749
5perc.data,39473,21852,6522,2404
'''
bell_df = pd.read_csv(io.BytesIO(bell_data))
nu_data = b'''\
File,HTTP,HTTP Cached,stashcp,stashcp Cached
95perc.data,41936,26895,23464,23630
25perc.data,1893,3891,4539,2471
1perc.data,303,82,3466,1037
big_blast.data,107924,104267,106650,102042
50perc.data,5608,9592,10192,6139
75perc.data,55947,5532,7179,6735
5perc.data,2147,1040,6838,1062
'''
nu_df = pd.read_csv(io.BytesIO(nu_data))
colorado_data = b'''\
File,HTTP,HTTP Cached,stashcp,stashcp Cached
95perc.data,33231,25138,171042,152467
25perc.data,2108,1598,16293,12261
1perc.data,702,30,2381,2065
big_blast.data,142787,151780,867209,525011
50perc.data,7695,4670,37257,18394
75perc.data,10885,4797,40700,29384
5perc.data,3443,244,4306,4585
'''
colorado_df = pd.read_csv(io.BytesIO(colorado_data))
mwt2_data = b'''\
File,HTTP,HTTP Cached,stashcp,stashcp Cached
95perc.data,25943,26464,25061,34575
25perc.data,3160,1659,4126,1956
1perc.data,1898,42,1183,503
big_blast.data,112637,123423,111614,113911
50perc.data,5204,5843,10022,5870
75perc.data,5654,5310,10143,6231
5perc.data,825,195,545,788
'''
mwt2_df = pd.read_csv(io.BytesIO(mwt2_data))
syr_data = b'''\
File,HTTP,HTTP Cached,stashcp,stashcp Cached
95perc.data,63492,68778,135127,69400
25perc.data,5568,2503,24237,8610
1perc.data,147,79,13909,6148
big_blast.data,319431,372661,379732,274453
50perc.data,12444,11628,38162,19537
75perc.data,14402,12302,45697,17505
5perc.data,846,724,33924,3886
'''
syr_df = pd.read_csv(io.BytesIO(syr_data))
mwt2_df
def calc_difference(df):
for row in df.iterrows():
if row[1]['File'] == "95perc.data":
return ((row[1]['stashcp Cached'] - row[1]['HTTP Cached']) / row[1]['HTTP Cached']) * 100
bell_perc_difference = calc_difference(bell_df)
syr_perc_difference = calc_difference(syr_df)
colorado_perc_difference = calc_difference(colorado_df)
nu_perc_difference = calc_difference(nu_df)
mwt2_perc_difference = calc_difference(mwt2_df)
print("Bellarmine",bell_perc_difference)
print("Syracuse",syr_perc_difference)
print("Colorado",colorado_perc_difference)
print("Nebraska",nu_perc_difference)
print("Chicago",mwt2_perc_difference)
```
### Mapping and Reducing
#### *map* and *starmap*
You should already know the `map` and `reduce` built-in functions, so let's quickly review them:
The `map` function applies a given function (that takes a single argument) to an iterable of values and yields (lazily) the result of applying the function to each element of the iterable.
Let's see a simple example that calculates the square of values in an iterable:
```
maps = map(lambda x: x**2, range(5))
list(maps)
```
Keep in mind that `map` returns an iterator, so it will become exhausted:
```
list(maps)
```
Of course, we can supply multiple values to a function by using an iterable of iterables (e.g. tuples) and unpacking the tuple in the function - but we still only use a single argument:
```
def add(t):
return t[0] + t[1]
list(map(add, [(0,0), [1,1], range(2,4)]))
```
Remember how we can unpack an iterable into separate positional arguments?
```
def add(x, y):
return x + y
t = (2, 3)
add(*t)
```
It would be nice if we could do that with the `map` function as well.
For example, it would be nice to do the following:
```
list(map(add, [(0,0), (1,1), (2,2)]))
```
But of course that is not going to work, since `add` expects two arguments, and only a single one (the tuple) was provided.
This is where `starmap` comes in - it will essentially `*` each element of the iterable before passing it to the function defined in the map:
```
from itertools import starmap
list(starmap(add, [(0,0), (1,1), (2,2)]))
```
#### Accumulation
You should already know the `sum` function - it simply calculates the sum of all the elements in an iterable:
```
sum([10, 20, 30])
```
It simply returns the final sum.
Sometimes we want to perform other operations than just summing up the values. Maybe we want to find the product of all the values in an iterable.
To do so, we would then use the `reduce` function available in the `functools` module. You should already be familiar with that function, but let's review it quickly.
The `reduce` function requires a `binary` function (a function that takes two arguments). It then applies that binary function to the first two elements of the iterable, obtains a result, then continues applying the binary function using the previous result and the next item in the iterable.
Optionally we can specify a seed value that is used as the 'first' element.
For example, to obtain the product of all values in an iterable:
```
from functools import reduce
reduce(lambda x, y: x*y, [1, 2, 3, 4])
```
We can even specify a "start" value:
```
reduce(lambda x, y: x*y, [1, 2, 3, 4], 10)
```
You'll note that with both `sum` and `reduce`, only the final result is shown - none of the intermediate results are available.
Sometimes we want to see the intermediate results as well.
Let's see how we might try it with the `sum` function:
```
def sum_(iterable):
it = iter(iterable)
acc = next(it)
yield acc
for item in it:
acc += item
yield acc
```
And we can use it as follows:
```
for item in sum_([10, 20, 30]):
print(item)
```
Of course, this is only going to work for a sum.
We may want the same functionality with arbitrary binary functions, just like `reduce` was more general than `sum`.
We could try doing it ourselves as follows:
```
def running_reduce(fn, iterable, start=None):
it = iter(iterable)
if start is None:
accumulator = next(it)
else:
accumulator = start
yield accumulator
for item in it:
accumulator = fn(accumulator, item)
yield accumulator
```
Let's try a running sum first.
We'll use the `operator` module instead of using lambdas.
```
import operator
list(running_reduce(operator.add, [10, 20, 30]))
```
Now we can also use other binary operators, such as multiplication:
```
list(running_reduce(operator.mul, [1, 2, 3, 4]))
```
And of course, we can even set a "start" value:
```
list(running_reduce(operator.mul, [1, 2, 3, 4], 10))
```
While this certainly works, we really don't need to code this ourselves - that's exactly what the `accumulate` function in `itertools` does for us.
The order of the arguments, however, is different: the iterable comes first, because the binary function is optional and defaults to addition if we don't specify it. It also does not have a "start" value option; if you really need that feature, you can use the technique I just showed you (or see the sketch after the next two examples).
```
from itertools import accumulate
list(accumulate([10, 20, 30]))
```
We can find the running product of an iterable:
```
list(accumulate([1, 2, 3, 4], operator.mul))
```
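For completeness, here is one way to emulate a "start" value with `accumulate`; this is just a sketch using `itertools.chain`, not something `accumulate` supports directly:
```
from itertools import accumulate, chain
import operator

# Prepend the "start" value to the iterable, then accumulate as usual.
list(accumulate(chain([10], [1, 2, 3, 4]), operator.mul))
# -> [10, 10, 20, 60, 240]
```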
# The Machine Learning landscape
## Types of ML systems
You can classify the algorithms using different methods
- whether or not trained by humans [`supervised, unsupervised, semi-supervised, reinforcement learning`]
- whether or not they can learn incrementally on the fly [`online vs batch`]
- whether they work by comparing test against train or by detecting patterns in train to predict the test data [`instance-based vs model-based`]
## Classification based on training required
### Supervised learning
You feed labeled data to the algorithm. `Classification` and `Regression` are the kinds of problems that can be solved with supervised learning (see the short sketch after this list). Some popular algorithms:
- k-Nearest Neighbors (KNN)
- Linear regression
- Logistic regression
- Support Vector Machines (SVM)
- Decision Trees and Random Forests
- Neural networks
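As a short illustrative sketch (the dataset and the model choice here are arbitrary, not prescribed by these notes):
```
# Minimal supervised-learning example: fit on labeled data, evaluate on held-out labels.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.score(X_test, y_test))  # accuracy on the held-out labeled data
```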
### Unsupervised learning
Training data is unlabeled. System tries to figure out the relationships. `Clustering`, `anomaly detection` and `Dimensionality Reduction` are good problems that can be solved with this type of learning. Some popular algorithms
- Clustering
- K-Means
- Hierarchical Cluster Analysis (HCA)
- Expectation Maximization
- Viz and dimensionality reduction
- Principal Component Analysis (PCA)
- Kernel PCA
- Locally-Linear Embedding (LLE)
- t-distributed stochastic neighbor embedding (t-SNE)
- Association rule learning (dig into large amounts of data, find interesting relationships b/w attributes)
- Apriori
- Eclat
### Semisupervised learning
Algorithms that can learn with partially labelled data and lots of unlabeled data. Some examples of algorithms
- deep belief networks (DBN)
- restricted boltzmann machines (RBMs)
### Reinforcement learning
Learning system (`agent`) can observe the environment, select and perform actions and get `rewards` or `penalties`. It must learn by itself to get the most reward over time (`policy`). Thus a `policy` defines what action the `agent` must take in a given situation.
## Classification based on learning rate
### Batch learning
- system is incapable of learning incrementally. Since training takes a lot of time and resources, it is done offline. Hence also called *offline learning*
- when new data arrives, the system must be taken offline and trained on full dataset (not just the new part).
### Online learning
- system can be trained incrementally; data is usually fed in mini-batches (see the sketch after this list).
- this also helps when the training data is so huge that it will not fit in one machine's memory: data can be fed in mini-batches and discarded before the next batch is loaded, etc.
- **learning rate** determines how fast the system can adapt to new or changing data.
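A minimal sketch of incremental training (the estimator and the synthetic mini-batches are arbitrary choices for illustration):
```
# Online learning with scikit-learn's partial_fit: the model sees one mini-batch at a time.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier()           # an estimator that supports incremental fitting
classes = np.array([0, 1])      # all classes must be declared on the first call
for _ in range(5):              # pretend each iteration is a new mini-batch arriving over time
    X_batch = np.random.randn(32, 4)
    y_batch = np.random.randint(0, 2, size=32)
    clf.partial_fit(X_batch, y_batch, classes=classes)
```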
## Classification based on generalization
### Instance based learning
- learns from examples by heart then generalizes to new cases using a *measure of similarity*
### Model based learning
- system builds a model using the training data and uses the model to make predictions.
## Main challenges of ML
- insufficient training data - the "unreasonable effectiveness of data" paper
- non representative training data - sampling bias, poor quality data, irrelevant features (can be rectified through feature engineering - feature selection and extraction)
- over-fitting the training data - happens when the model is too complex relative to the amount and noisiness of the data. One solution is to *constrain* the model (i.e. regularize it).
- **hyperparameters** - parameters of the learning algorithm itself rather than of the model (e.g. the amount of regularization to apply); they are set before training and held constant during it.
- underfitting the training data - when the model is too simple to learn the phenomena in the data.
```
#export
from fastai2.basics import *
from fastai2.text.core import *
from fastai2.text.data import *
from fastai2.text.models.core import *
from fastai2.text.models.awdlstm import *
from fastai2.callback.rnn import *
from fastai2.callback.progress import *
#hide
from nbdev.showdoc import *
#default_exp text.learner
```
# Learner for the text application
> All the functions necessary to build `Learner` suitable for transfer learning in NLP
The most important functions of this module are `language_model_learner` and `text_classifier_learner`. They will help you define a `Learner` using a pretrained model. See the [text tutorial](http://dev.fast.ai/tutorial.text) for examples of use.
## Loading a pretrained model
In text, to load a pretrained model, we need to adapt the embeddings of the vocabulary used for the pre-training to the vocabulary of our current corpus.
```
#export
def match_embeds(old_wgts, old_vocab, new_vocab):
"Convert the embedding in `old_wgts` to go from `old_vocab` to `new_vocab`."
bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight']
wgts_m = wgts.mean(0)
new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1)))
if bias is not None:
bias_m = bias.mean(0)
new_bias = bias.new_zeros((len(new_vocab),))
old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)}
for i,w in enumerate(new_vocab):
idx = old_o2i.get(w, -1)
new_wgts[i] = wgts[idx] if idx>=0 else wgts_m
if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m
old_wgts['0.encoder.weight'] = new_wgts
if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone()
old_wgts['1.decoder.weight'] = new_wgts.clone()
if bias is not None: old_wgts['1.decoder.bias'] = new_bias
return old_wgts
```
For words in `new_vocab` that don't have a corresponding match in `old_vocab`, we use the mean of all pretrained embeddings.
```
wgts = {'0.encoder.weight': torch.randn(5,3)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
test_eq(new[0], old[0])
test_eq(new[1], old[2])
test_eq(new[2], old.mean(0))
test_eq(new[3], old[1])
#hide
#With bias
wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
old_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias']
test_eq(new_w[0], old_w[0])
test_eq(new_w[1], old_w[2])
test_eq(new_w[2], old_w.mean(0))
test_eq(new_w[3], old_w[1])
test_eq(new_b[0], old_b[0])
test_eq(new_b[1], old_b[2])
test_eq(new_b[2], old_b.mean(0))
test_eq(new_b[3], old_b[1])
#export
def _get_text_vocab(dls):
vocab = dls.vocab
if isinstance(vocab, L): vocab = vocab[0]
return vocab
#export
def load_ignore_keys(model, wgts):
"Load `wgts` in `model` ignoring the names of the keys, just taking parameters in order"
sd = model.state_dict()
for k1,k2 in zip(sd.keys(), wgts.keys()): sd[k1].data = wgts[k2].data.clone()
return model.load_state_dict(sd)
#export
@delegates(Learner.__init__)
class TextLearner(Learner):
"Basic class for a `Learner` in NLP."
def __init__(self, model, dls, alpha=2., beta=1., moms=(0.8,0.7,0.8), **kwargs):
super().__init__(model, dls, moms=moms, **kwargs)
self.add_cbs([ModelReseter(), RNNRegularizer(alpha=alpha, beta=beta)])
def save_encoder(self, file):
"Save the encoder to `file` in the model directory"
if rank_distrib(): return # don't save if slave proc
encoder = get_model(self.model)[0]
if hasattr(encoder, 'module'): encoder = encoder.module
torch.save(encoder.state_dict(), join_path_file(file, self.path/self.model_dir, ext='.pth'))
def load_encoder(self, file, device=None):
"Load the encoder `file` from the model directory, optionally ensuring it's on `device`"
encoder = get_model(self.model)[0]
if device is None: device = self.dls.device
if hasattr(encoder, 'module'): encoder = encoder.module
distrib_barrier()
encoder.load_state_dict(torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device))
self.freeze()
return self
def load_pretrained(self, wgts_fname, vocab_fname, model=None):
"Load a pretrained model and adapt it to the data vocabulary."
old_vocab = Path(vocab_fname).load()
new_vocab = _get_text_vocab(self.dls)
distrib_barrier()
wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage)
if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer
wgts = match_embeds(wgts, old_vocab, new_vocab)
load_ignore_keys(self.model if model is None else model, wgts)
self.freeze()
return self
```
Adds a `ModelReseter` and an `RNNRegularizer` with `alpha` and `beta` to the callbacks; the rest is the same as the `Learner` init.
This `Learner` adds functionality to the base class:
```
show_doc(TextLearner.load_pretrained)
```
`wgts_fname` should point to the weights of the pretrained model and `vocab_fname` to the vocabulary used to pretrain it.
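For example (hypothetical file names; in practice they come from your own pretraining run):
```
# Sketch only: adapt pretrained weights and vocab to the current dls vocabulary.
learn = learn.load_pretrained('models/lm_wgts.pth', 'models/lm_vocab.pkl')
```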
```
show_doc(TextLearner.save_encoder)
```
The model directory is `Learner.path/Learner.model_dir`.
```
show_doc(TextLearner.load_encoder)
```
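In a typical fine-tuning workflow (sketch only; the file name `'finetuned'` is arbitrary), the encoder saved from a fine-tuned language model is loaded into the classifier learner:
```
# Hypothetical usage: `lm_learn` and `clas_learn` are TextLearners built on the same vocab.
lm_learn.save_encoder('finetuned')
clas_learn = clas_learn.load_encoder('finetuned')
```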
## Language modeling predictions
For language modeling, the predict method is quite different from the other applications, which is why it needs its own subclass.
```
#export
def decode_spec_tokens(tokens):
"Decode the special tokens in `tokens`"
new_toks,rule,arg = [],None,None
for t in tokens:
if t in [TK_MAJ, TK_UP, TK_REP, TK_WREP]: rule = t
elif rule is None: new_toks.append(t)
elif rule == TK_MAJ:
new_toks.append(t[:1].upper() + t[1:].lower())
rule = None
elif rule == TK_UP:
new_toks.append(t.upper())
rule = None
elif arg is None:
try: arg = int(t)
except: rule = None
else:
if rule == TK_REP: new_toks.append(t * arg)
else: new_toks += [t] * arg
return new_toks
test_eq(decode_spec_tokens(['xxmaj', 'text']), ['Text'])
test_eq(decode_spec_tokens(['xxup', 'text']), ['TEXT'])
test_eq(decode_spec_tokens(['xxrep', '3', 'a']), ['aaa'])
test_eq(decode_spec_tokens(['xxwrep', '3', 'word']), ['word', 'word', 'word'])
#export
class LMLearner(TextLearner):
"Add functionality to `TextLearner` when dealingwith a language model"
def predict(self, text, n_words=1, no_unk=True, temperature=1., min_p=None, no_bar=False,
decoder=decode_spec_tokens):
"Return `text` and the `n_words` that come after"
self.model.reset()
idxs = self.dls.test_dl([text]).items[0].to(self.dls.device)
if no_unk: unk_idx = self.dls.vocab.index(UNK)
for _ in (range(n_words) if no_bar else progress_bar(range(n_words), leave=False)):
with self.no_bar(): preds,_ = self.get_preds(dl=[(idxs[None],)])
res = preds[0][-1]
if no_unk: res[unk_idx] = 0.
if min_p is not None:
if (res >= min_p).float().sum() == 0:
warn(f"There is no item with probability >= {min_p}, try a lower value.")
else: res[res < min_p] = 0.
if temperature != 1.: res.pow_(1 / temperature)
idx = torch.multinomial(res, 1).item()
idxs = torch.cat([idxs, idxs.new([idx])])
num = self.dls.train_ds.numericalize
tokens = [num.vocab[i] for i in idxs if num.vocab[i] not in [BOS, PAD]]
sep = self.dls.train_ds.tokenizer[-1].sep
return sep.join(decoder(tokens))
@delegates(Learner.get_preds)
def get_preds(self, concat_dim=1, **kwargs): return super().get_preds(concat_dim=1, **kwargs)
show_doc(LMLearner, title_level=3)
show_doc(LMLearner.predict)
```
The words are picked randomly among the predictions, depending on the probability of each index. `no_unk` means we never pick the `UNK` token, `temperature` is applied to the predictions, and if `min_p` is passed, we don't consider the indices with a probability lower than it. Set `no_bar` to `True` if you don't want any progress bar, and you can pass along a custom `decoder` to process the predicted tokens.
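For instance (assuming a trained `LMLearner` called `learn`, like the one created in the next section), sampling can be made more conservative like this:
```
# Hypothetical call showing the optional arguments of LMLearner.predict.
learn.predict('This movie is about', n_words=15, temperature=0.75, min_p=0.01, no_bar=True)
```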
## `Learner` convenience functions
```
#export
from fastai2.text.models.core import _model_meta
#export
def _get_text_vocab(dls):
vocab = dls.vocab
if isinstance(vocab, L): vocab = vocab[0]
return vocab
#export
@delegates(Learner.__init__)
def language_model_learner(dls, arch, config=None, drop_mult=1., pretrained=True, pretrained_fnames=None, **kwargs):
"Create a `Learner` with a language model from `dls` and `arch`."
vocab = _get_text_vocab(dls)
model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult)
meta = _model_meta[arch]
learn = LMLearner(dls, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs)
    #TODO: add backward
#url = 'url_bwd' if data.backwards else 'url'
if pretrained or pretrained_fnames:
if pretrained_fnames is not None:
fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])]
else:
if 'url' not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta['url'] , c_key='model')
fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
learn = learn.load_pretrained(*fnames)
return learn
```
You can use the `config` to customize the architecture used (change the values from `awd_lstm_lm_config` for this), `pretrained` will use fastai's pretrained model for this `arch` (if available) or you can pass specific `pretrained_fnames` containing your own pretrained model and the corresponding vocabulary. All other arguments are passed to `Learner`.
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
dls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid')
learn = language_model_learner(dls, AWD_LSTM)
learn.predict('This movie is about', n_words=20)
#export
@delegates(Learner.__init__)
def text_classifier_learner(dls, arch, seq_len=72, config=None, pretrained=True, drop_mult=0.5, n_out=None,
lin_ftrs=None, ps=None, max_len=72*20, **kwargs):
"Create a `Learner` with a text classifier from `dls` and `arch`."
vocab = _get_text_vocab(dls)
if n_out is None: n_out = get_c(dls)
    assert n_out, "`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`"
model = get_text_classifier(arch, len(vocab), n_out, seq_len=seq_len, config=config,
drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps, max_len=max_len)
meta = _model_meta[arch]
learn = TextLearner(dls, model, splitter=meta['split_clas'], **kwargs)
if pretrained:
if 'url' not in meta:
warn("There are no pretrained weights for that architecture yet!")
return learn
model_path = untar_data(meta['url'], c_key='model')
fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]
learn = learn.load_pretrained(*fnames, model=learn.model[0])
learn.freeze()
return learn
```
You can use the `config` to customize the architecture used (change the values from `awd_lstm_clas_config` for this), and `pretrained` will use fastai's pretrained model for this `arch` (if available). `drop_mult` is a global multiplier applied to control all dropouts. `n_out` is usually inferred from the `dls` but you may pass it.
The model uses a `SentenceEncoder`, which means the texts are passed `seq_len` tokens at a time, and gradients are only computed on the last `max_len` steps. `lin_ftrs` and `ps` are passed to `get_text_classifier`.
All other arguments are passed to `Learner`.
```
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
dls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid')
learn = text_classifier_learner(dls, AWD_LSTM)
```
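As a sketch of the `config` customization mentioned above (hedged: the key names come from `awd_lstm_clas_config`, which is a plain dict; since a custom width no longer matches the fastai pretrained weights, `pretrained=False` is used here):
```python
# Sketch only: customize the AWD-LSTM classifier config and the dropout multiplier.
custom_config = awd_lstm_clas_config.copy()
custom_config.update(n_hid=768, n_layers=2)          # smaller recurrent stack (assumed keys)
learn = text_classifier_learner(dls, AWD_LSTM,
                                config=custom_config,
                                pretrained=False,     # pretrained weights only fit the default sizes
                                drop_mult=0.7,        # global multiplier on all dropouts
                                max_len=72*10)        # gradients only on the last max_len steps
```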
## Show methods -
```
#export
@typedispatch
def show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
for i,l in enumerate(['input', 'target']):
ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]
ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))]
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, trunc_at=150, **kwargs):
if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))
samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)
ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)
display_df(pd.DataFrame(ctxs))
return ctxs
#export
@typedispatch
def plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, trunc_at=150, **kwargs):
rows = get_empty_df(len(samples))
samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)
for i,l in enumerate(['input', 'target']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)]
outs = L(o + (TitledFloat(r.max().item()), TitledFloat(l.item())) for o,r,l in zip(outs, raws, losses))
for i,l in enumerate(['predicted', 'probability', 'loss']):
rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)]
display_df(pd.DataFrame(rows))
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
**[Machine Learning Micro-Course Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**
---
## Recap
You've built your first model, and now it's time to optimize the size of the tree to make better predictions. Run this cell to set up your coding environment where the previous step left off.
```
# Code you have previously used to load data
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# Path of the file to read
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# Create target object and call it y
y = home_data.SalePrice
# Create X
features = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[features]
# Split into validation and training data
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Specify Model
iowa_model = DecisionTreeRegressor(random_state=1)
# Fit Model
iowa_model.fit(train_X, train_y)
# Make validation predictions and calculate mean absolute error
val_predictions = iowa_model.predict(val_X)
val_mae = mean_absolute_error(val_predictions, val_y)
print("Validation MAE: {:,.0f}".format(val_mae))
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex5 import *
print("\nSetup complete")
```
# Exercises
You could write the function `get_mae` yourself. For now, we'll supply it. This is the same function you read about in the previous lesson. Just run the cell below.
```
def get_mae(max_leaf_nodes, train_X, val_X, train_y, val_y):
model = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes, random_state=0)
model.fit(train_X, train_y)
preds_val = model.predict(val_X)
mae = mean_absolute_error(val_y, preds_val)
return(mae)
```
## Step 1: Compare Different Tree Sizes
Write a loop that tries each of the following candidate values for *max_leaf_nodes*.
Call the *get_mae* function on each value of max_leaf_nodes. Store the output in some way that allows you to select the value of `max_leaf_nodes` that gives the most accurate model on your data.
```
candidate_max_leaf_nodes = [5, 25, 50, 100, 250, 500]
# Write loop to find the ideal tree size from candidate_max_leaf_nodes
for candidate in candidate_max_leaf_nodes:
scores = get_mae(candidate, train_X, val_X, train_y, val_y)
print(scores)
# Store the best value of max_leaf_nodes (it will be either 5, 25, 50, 100, 250 or 500)
best_tree_size = 100
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
```
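If you prefer not to read the best value off the printed output, one possible way to store the results is a dictionary keyed by tree size (illustrative; equivalent to the hard-coded value above):
```python
# Sketch: map each candidate size to its validation MAE, then take the size with the lowest MAE
scores = {size: get_mae(size, train_X, val_X, train_y, val_y)
          for size in candidate_max_leaf_nodes}
best_tree_size = min(scores, key=scores.get)
```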
## Step 2: Fit Model Using All Data
You know the best tree size. If you were going to deploy this model in practice, you would make it even more accurate by using all of the data and keeping that tree size. That is, you don't need to hold out the validation data now that you've made all your modeling decisions.
```
# Fill in argument to make optimal size and uncomment
final_model = DecisionTreeRegressor(max_leaf_nodes=best_tree_size, random_state=1)
# fit the final model and uncomment the next two lines
final_model.fit(X, y)
step_2.check()
# step_2.hint()
# step_2.solution()
```
You've tuned this model and improved your results. But we are still using Decision Tree models, which are not very sophisticated by modern machine learning standards. In the next step you will learn to use Random Forests to improve your models even more.
# Keep Going
You are ready for **[Random Forests](https://www.kaggle.com/dansbecker/random-forests).**
---
**[Machine Learning Micro-Course Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn import tree
import xgboost as xgb
# Get Titanic dataset
data = pd.read_csv("data/titanic_dataset.csv")
data.index = data.PassengerId.values
data.drop('PassengerId',axis=1,inplace=True)
print("dataset shape: " + str(data.shape))
data.head()
# Prepare data (feature engineering)
# 1) Transform string values into integer values for categorical features (Sex, Embarked)
data['Sex'] = data['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
data['Embarked'] = data['Embarked'].fillna('U').map( {'S': 0, 'C': 1, 'Q': 2, 'U': 3 } ).astype(int)
# 2) Create a new boolean feature 'HasCabin' which is False if Cabin is NaN, True otherwise
data['HasCabin'] = data.Cabin.notnull() * 1
# 3) Drop unused features
data.drop(['Name','Ticket','Cabin'],axis=1,inplace=True)
# 4) Missing values: drop rows with NaN in Age, for simplicity
data.dropna(inplace=True)
# Look at the data
data.head()
# Split features and labels into X and Y numpy arrays
X = data.drop('Survived',axis=1).values
Y = data.Survived.values.reshape(X.shape[0],1)
# Split into train and test set (80/20)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
print("Number of entries in the training set : {}".format(X_train.shape[0]))
print("Number of entries in the test set : {}".format(X_test.shape[0]))
print("Number of features in the training set: {}".format(X_train.shape[1]))
# Prepare model comparison table
compModel = pd.DataFrame({'accuracy':0}, index = ['Decision tree','Random forest','AdaBoost','XGBoost']).T
# DECISION TREE
print("DECISION TREE:")
# Prepare DecisionTree model and fit it to the data
dt = DecisionTreeClassifier().fit(X_train, Y_train)
# Make prediction
predictions = dt.predict(X_test)
print('Prediction examples: ' + str(dt.predict(X_test[:10])))
# Get accuracy of this model
score = dt.score(X_test, Y_test)
compModel['Decision tree'] = score
print("Decision Tree accuracy: {}".format(score))
# RANDOM FOREST
print("RANDOM FOREST:")
# Prepare RandomForest model and fit it to the data
rf = RandomForestClassifier(n_estimators=50).fit(X_train, Y_train.reshape(Y_train.shape[0],))
# Make prediction
predictions = rf.predict(X_test)
print('Prediction examples: ' + str(rf.predict(X_test[:10])))
# Get accuracy of this model
score = rf.score(X_test, Y_test)
compModel['Random forest'] = score
print("Random Forest accuracy: {}".format(score))
# AdaBoost
print("AdaBoost:")
# Prepare AdaBoost model and fit it to the data
adaB = AdaBoostClassifier(n_estimators=50).fit(X_train, Y_train.reshape(Y_train.shape[0],))
# Make prediction
predictions = adaB.predict(X_test)
print('Prediction examples: ' + str(adaB.predict(X_test[:10])))
# Get accuracy of this model
score = adaB.score(X_test, Y_test)
compModel['AdaBoost'] = score
print("AdaBoost accuracy: {}".format(score))
# XGBoost
print("XGBoost")
print()
# Prepare dataset
xgb_train = xgb.DMatrix(X_train, label = Y_train)
xgb_test = xgb.DMatrix(X_test, label = Y_test)
watchlist = [(xgb_train, 'train'), (xgb_test, 'valid')]
# Prepare model (hyperparameters)
xgb_pars = {'min_child_weight': 5, 'eta': 0.9, 'max_depth': 15, 'gamma': 0.5,
'booster' : 'gbtree', 'objective': 'binary:logistic'}
# Train the XGBoost model
xgbModel = xgb.train(xgb_pars, xgb_train, 50, watchlist, early_stopping_rounds=50, maximize=False, verbose_eval=10)
print('Best validation score: %.5f' % xgbModel.best_score)
print()
# Make prediction
predictions = (xgbModel.predict(xgb_test) > 0.5) * 1
print('Prediction examples: ' + str((xgbModel.predict(xgb_test)[:10] > 0.5)*1))
# Get accuracy of this model
score = (Y_test.reshape(Y_test.shape[0],) == predictions).sum() / Y_test.shape[0]
compModel['XGBoost'] = score
print("XGBoost accuracy: {}".format(score))
# Compare model performances:
compModel.T.sort_values('accuracy',ascending=False)
```
# for Loops
A <code>for</code> loop acts as an iterator in Python; it goes through items that are in a *sequence* or any other iterable item. Objects that we've learned about that we can iterate over include strings, lists, tuples, and even built-in iterables for dictionaries, such as keys or values.
We've already seen the <code>for</code> statement a little bit in past lectures but now let's formalize our understanding.
Here's the general format for a <code>for</code> loop in Python:
for item in object:
statements to do stuff
The variable name used for the item is completely up to the coder, so use your best judgment for choosing a name that makes sense and you will be able to understand when revisiting your code. This item name can then be referenced inside your loop, for example if you wanted to use <code>if</code> statements to perform checks.
Let's go ahead and work through several example of <code>for</code> loops using a variety of data object types. We'll start simple and build more complexity later on.
## Example 1
Iterating through a list
```
# We'll learn how to automate this sort of list in the next lecture
list1 = [1,2,3,4,5,6,7,8,9,10]
for num in list1:
print(num)
```
Great! Hopefully this makes sense. Now let's add an <code>if</code> statement to check for even numbers. We'll first introduce a new concept here--the modulo.
### Modulo
The modulo allows us to get the remainder in a division and uses the % symbol. For example:
```
17 % 5
```
This makes sense since 17 divided by 5 is 3 remainder 2. Let's see a few more quick examples:
```
# 3 Remainder 1
10 % 3
# 2 Remainder 4
18 % 7
# 2 no remainder
4 % 2
```
Notice that if a number is fully divisible with no remainder, the result of the modulo call is 0. We can use this to test for even numbers, since if a number modulo 2 is equal to 0, that means it is an even number!
Back to the <code>for</code> loops!
## Example 2
Let's print only the even numbers from that list!
```
for num in list1:
if num % 2 == 0:
print(num)
```
We could have also put an <code>else</code> statement in there:
```
for num in list1:
if num % 2 == 0:
print(num)
else:
print('Odd number')
```
## Example 3
Another common idea during a <code>for</code> loop is keeping some sort of running tally during multiple loops. For example, let's create a <code>for</code> loop that sums up the list:
```
# Start sum at zero
list_sum = 0
for num in list1:
list_sum = list_sum + num
print(list_sum)
```
Great! Read over the above cell and make sure you understand fully what is going on. Also we could have implemented a <code>+=</code> to perform the addition towards the sum. For example:
```
# Start sum at zero
list_sum = 0
for num in list1:
list_sum += num
print(list_sum)
```
## Example 4
We've used <code>for</code> loops with lists, how about with strings? Remember strings are a sequence so when we iterate through them we will be accessing each item in that string.
```
for letter in 'This is a string.':
print(letter)
```
## Example 5
Let's now look at how a <code>for</code> loop can be used with a tuple:
```
tup = (1,2,3,4,5)
for t in tup:
print(t)
```
## Example 6
Tuples have a special quality when it comes to <code>for</code> loops. If you are iterating through a sequence that contains tuples, the item can actually be the tuple itself; this is an example of *tuple unpacking*. During the <code>for</code> loop we will be unpacking the tuple inside of a sequence and we can access the individual items inside that tuple!
```
list2 = [(2,4),(6,8),(10,12)]
for tup in list2:
print(tup)
# Now with unpacking!
for (t1,t2) in list2:
print(t1)
```
Cool! With tuples in a sequence we can access the items inside of them through unpacking! The reason this is important is that many objects deliver their iterables through tuples. Let's explore iterating through dictionaries to see this further!
## Example 7
```
d = {'k1':1,'k2':2,'k3':3}
for item in d:
print(item)
```
Notice how this produces only the keys. So how can we get the values? Or both the keys and the values?
We're going to introduce three new Dictionary methods: **.keys()**, **.values()** and **.items()**
In Python each of these methods returns a *dictionary view object*. It supports operations like membership tests and iteration, but its contents are not independent of the original dictionary – it is only a view. Let's see it in action:
```
# Create a dictionary view object
d.items()
```
Since the .items() method supports iteration, we can perform *dictionary unpacking* to separate keys and values just as we did in the previous examples.
```
# Dictionary unpacking
for k,v in d.items():
print(k)
print(v)
```
If you want to obtain a true list of keys, values, or key/value tuples, you can *cast* the view as a list:
```
list(d.keys())
```
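The same casting works for the other two views, for example:
```python
# Cast the values view and the items view into plain lists
list(d.values())
list(d.items())
```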
Remember that dictionaries are unordered, and that keys and values come back in arbitrary order. You can obtain a sorted list using sorted():
```
sorted(d.values())
```
## Conclusion
We've learned how to use for loops to iterate through tuples, lists, strings, and dictionaries. It will be an important tool for us, so make sure you know it well and understood the above examples.
[More resources](http://www.tutorialspoint.com/python/python_for_loop.htm)
```
library(dplyr)
library(ggplot2)
mydir = "/nfs/leia/research/stegle/dseaton/hipsci/singlecell_neuroseq/data/data_processed/pool1_17_D52/"
mysuffix = "pool1_17_D52.scanpy.w_metadata.w_celltype.scanpy.obs_df.groupedby.donor_id-pool_id-time_point-treatment.celltype_counts.tsv"
myfilename = paste0(mydir,mysuffix)
df = read.table(myfilename, header = T)
df$celltype <- as.character(df$celltype)
df$celltype[df$celltype == "CHem"] <- "U_Neur1"
df$celltype[df$celltype == "unknown"] <- "U_Neur3"
head(df)
#### both treated and untreated day 52 cells
df1 = df
df_tot_d52_mid = df[df$celltype %in% c("DA","Sert"),] %>% group_by(donor_id,pool_id) %>%
summarize(total_midbrain_cells = sum(n_cells))
df_tot_d52 = df1 %>% group_by(donor_id,pool_id) %>% summarize(total_cells = sum(n_cells)) # full (treated + untreated) data; df2 is only defined later
nrow(df_tot_d52)
df0 = inner_join(df_tot_d52, df_tot_d52_mid, by = c("donor_id","pool_id"))
nrow(df0)
head(df0)
df0$diff_eff = df0$total_midbrain_cells/df0$total_cells
head(df0)
df_donor = df0 %>% group_by(donor_id) %>% summarize(avg_de1 = mean(diff_eff)) # D52 DA+Sert ROT+NONE
#### untreated day 52 cells only
df2 = df[df$treatment == "NONE",]
df_tot_d52_mid = df2[df2$celltype %in% c("DA","Sert"),] %>% group_by(donor_id,pool_id) %>%
summarize(total_midbrain_cells = sum(n_cells))
nrow(df_tot_d52_mid)
df_tot_d52 = df2 %>% group_by(donor_id,pool_id) %>% summarize(total_cells = sum(n_cells))
nrow(df_tot_d52)
df0 = inner_join(df_tot_d52, df_tot_d52_mid, by = c("donor_id","pool_id"))
nrow(df0)
head(df0)
df0$diff_eff = df0$total_midbrain_cells/df0$total_cells
head(df0)
df_donor2 = df0 %>% group_by(donor_id) %>% summarize(avg_de2 = mean(diff_eff)) # D52 DA+Sert NONE
head(df_donor2)
df_donor_compare = inner_join(df_donor,df_donor2)
head(df_donor_compare)
r = cor(df_donor_compare$avg_de1,df_donor_compare$avg_de2)
options(repr.plot.width=5, repr.plot.height=5)
ggplot(df_donor_compare, aes(x=avg_de1,y=avg_de2)) + geom_point() +
xlab("differentiation efficiency using all D52 cells") +
ylab("differentiation efficiency using only untreated D52 cells") +
annotate("text",x = 0.2,y=0.9,label=paste0("R=",round(r,digits =2)),size =6) +
geom_abline(intercept = 0, alpha = 0.5) + theme_classic()
fig_dir = "/hps/nobackup/stegle/users/acuomo/all_scripts/sc_neuroseq/figures/extended_figures/"
pdf(paste0(fig_dir,"SF_9a.pdf"), width=5, height=5)
ggplot(df_donor_compare, aes(x=avg_de1,y=avg_de2)) + geom_point() +
xlab("differentiation efficiency using all D52 cells") +
ylab("differentiation efficiency using only untreated D52 cells") +
annotate("text",x = 0.2,y=0.9,label=paste0("R=",round(r,digits =2)),size =6) +
geom_abline(intercept = 0, alpha = 0.5) + theme_classic()
dev.off()
```
# Kaggle MSD Challenge
Here is a sample algorithm for a song-recommendation system, built for the Million Song Dataset Challenge on Kaggle.
```
DATA_PATH = 'data/'
EVAL_TRIPLETS_TXT = DATA_PATH + 'kaggle_visible_evaluation_triplets.txt'
USERS_TXT = DATA_PATH + 'kaggle_users.txt'
SONGS_TXT = DATA_PATH + 'kaggle_songs.txt'
```
In the following lines of code, we open the file, create a mapping from a song ID to the number
of times this song appears, and close the file.
```
f = open(EVAL_TRIPLETS_TXT, 'r')
song_to_count = dict()
for line in f:
_, song, _ = line.strip().split('\t')
if song in song_to_count:
song_to_count[song] += 1
else:
song_to_count[song] = 1
f.close()
songs_ordered = sorted(song_to_count.keys(), key=lambda s: song_to_count[s],reverse=True)
```
We will recommend the most popular songs to every user, but we must filter out songs already
in the user’s library. Reopening the triplets file, we will create a map from user to songs they
have listened to.
```
f = open(EVAL_TRIPLETS_TXT, 'r')
user_to_songs = dict()
for line in f:
user, song, _ = line.strip().split('\t')
if user in user_to_songs:
user_to_songs[user].add(song)
else:
user_to_songs[user] = set([song])
f.close()
user_to_songs
```
Ok, we now have the songs ordered by popularity, and listening history for each user. To
produce our submission file, we’ll need to load the canonical ordering of users:
```
f = open(USERS_TXT, 'r')
canonical_users = [line.strip() for line in f.readlines()]  # a list (not a lazy map) so it can be sliced and iterated more than once
f.close()
canonical_users[:2]
```
We are almost there, but we're missing one more thing. To reduce the size of submission files,
we do not submit a list of song IDs such as SOSOUKN12A8C13AB79, but rather their index in
the canonical list of songs.
```
f = open(SONGS_TXT, 'r')
song_to_index = dict(map(lambda line: line.strip().split(' '), f.readlines()))
f.close()
song_to_index
```
Finally, we are ready to create the submission file. For each user in the canonical list,
recommend the songs in order of popularity, except those already in the user’s profile.
```
f = open('submission.txt', 'w')
for user in canonical_users:
songs_to_recommend = []
for song in songs_ordered:
if len(songs_to_recommend) >= 500:
break
        if song not in user_to_songs[user]:
songs_to_recommend.append(song)
# Transform song IDs to song indexes
indices = map(lambda s: song_to_index[s], songs_to_recommend)
# Write line for that user
f.write(' '.join(indices) + '\n')
f.close()
```
### Yield From
In the last video we saw when we had two nested generators that we had to use a nested loop in order to iterate through both iterators:
```
def matrix(n):
gen = ( (i * j for j in range(1, n+1))
for i in range(1, n+1)
)
return gen
m = list(matrix(5))
m
```
Suppose we want an iterator to iterate over all the values of the matrix, element by element.
We could write it this way:
```
def matrix_iterator(n):
for row in matrix(n):
for item in row:
yield item
```
All we have done here is create a generator (iterator) that can be used to iterate over the elements of a nested iterator.
We can then use it this way:
```
for i in matrix_iterator(3):
print(i)
```
But we can avoid using that nested for loop by using a special form of `yield`: `yield from`
```
def matrix_iterator(n):
for row in matrix(n):
yield from row
for i in matrix_iterator(3):
print(i)
```
As you can see we obtain the same result.
We can think of
```
yield from <iterator>
```
as a replacement for the code:
```
for i in <iterator>:
yield i
```
We'll come back to `yield from` in more detail, because there's a **lot** more to it than just a simple replacement for that inner loop!
#### Example
Here's an example where using `yield from` can be quite effective.
In this example we need to read car brands from multiple files to get it as a single collection.
We might do it this way:
```
brands = []
with open('car-brands-1.txt') as f:
for brand in f:
brands.append(brand.strip('\n'))
with open('car-brands-2.txt') as f:
for brand in f:
brands.append(brand.strip('\n'))
with open('car-brands-3.txt') as f:
for brand in f:
brands.append(brand.strip('\n'))
for brand in brands:
print(brand, end=', ')
```
But notice that we had to load up the entire data set in memory.
As we have discussed before this is not very efficient.
Instead we could use a generator approach as follows:
```
def brands(*files):
for f_name in files:
with open(f_name) as f:
for line in f:
yield line.strip('\n')
files = 'car-brands-1.txt', 'car-brands-2.txt', 'car-brands-3.txt'
for brand in brands(*files):
print(brand, end = ', ')
```
We can simplify our function by using `yield from`:
```
def brands(*files):
for f_name in files:
with open(f_name) as f:
yield from f
for brand in brands(*files):
print(brand, end=', ')
```
Now we still have to clean up that trailing `\n` character...
So, we are going to create generators that can read each line of the file, and yield a clean result, and we'll `yield from` that generator:
```
def gen_clean_read(file):
with open(file) as f:
for line in f:
yield line.strip('\n')
```
As you can see, this generator function will clean each line of the file before yielding it. Let's try it with a single file and make sure it works:
```
f1 = gen_clean_read('car-brands-1.txt')
for line in f1:
print(line, end=', ')
```
Ok, that works. So now, we can proceed with our overarching generator function as before, except we'll `yield from` our generators, instead of directly from the file iterator:
```
files = 'car-brands-1.txt', 'car-brands-2.txt', 'car-brands-3.txt'
def brands(*files):
for file in files:
yield from gen_clean_read(file)
for brand in brands(*files):
print(brand, end=', ')
```
I want to point out that in this particular instance, we are using `yield from` as a simple replacement for a `for` loop. We could equally well have written it this way:
Using `yield from`:
```
def brands(*files):
for file in files:
yield from gen_clean_read(file)
```
Without using `yield from`:
```
def brands(*files):
for file in files:
for line in gen_clean_read(file):
yield line
for brand in brands(*files):
print(brand, end=', ')
```
We'll come back to `yield from` in a lot more detail later when we study coroutines - there's a whole lot more to `yield from` than a replacement for a simple loop!
# Practice Session 06: PageRank
We will compute PageRank on a graph that represents the web of UK around 2007. Each node is a host, and there is a link between two hosts if there is a web page in one of them pointing to a web page in the other one. This network is weighted: the weight is the number of pages that point from one host to the other one.
The collection we will use, [WEBSPAM-UK2007](http://chato.cl/webspam/datasets/uk2007/), has been used in multiple studies on the effect of web spam. Feel free to decompress these files to inspect them, **but your code must read only these files in compressed form**:
* ``webspam_uk2007-nodes.csv.gz`` contains (``nodeid``, ``hostname``, ``label``) records
* ``webspam_uk2007-edges.csv.gz`` contains (``source``, ``destination``, ``weight``) records
Your task is to compute PageRank twice: first considering all the links, and then ignoring links from or to a known spam host.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
# 1. Read host names
Read the names of the nodes and the labels. For this, you can use [csv.DictReader](https://docs.python.org/3/library/csv.html#csv.DictReader). Suppose ``FILENAME`` points to a file with the following contents:
```
a,b,c,d
1,2,3,4
5,6,7,8
```
The following code:
```python
with gzip.open(FILENAME, "rt", encoding="utf-8") as input_file:
reader = csv.DictReader(input_file, delimiter=',', quotechar='"')
for record in reader:
print(record["b"])
```
Prints:
```
2
6
```
Remember in the `INPUT_NODES_FILENAME` each record contains ``nodeid``, ``hostname``, and ``label``.
Read the id to name mapping into a dictionary `id2name`, the name to id mapping into a dictionary `name2id`, and the id to label mapping into another dictionary `id2label`. The node ids used as keys in `id2name` and `id2label` (and as values in `name2id`) should be converted to integers using ``int(...)`` (a minimal sketch is included after the next cell).
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
import io
import gzip
import csv
import networkx as nx
import matplotlib.pyplot as plt
INPUT_NODES_FILENAME = "webspam_uk2007-nodes.csv.gz"
INPUT_EDGES_FILENAME = "webspam_uk2007-edges.csv.gz"
```
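If you want a starting point, a minimal sketch following the `DictReader` pattern above could look like this (the column names are the ones listed earlier; adapt as needed):
```python
# Sketch: build the three dictionaries in a single pass over the nodes file
id2name, name2id, id2label = {}, {}, {}
with gzip.open(INPUT_NODES_FILENAME, "rt", encoding="utf-8") as input_file:
    reader = csv.DictReader(input_file, delimiter=',', quotechar='"')
    for record in reader:
        nodeid = int(record["nodeid"])          # keys must be integers
        id2name[nodeid] = record["hostname"]
        name2id[record["hostname"]] = nodeid
        id2label[nodeid] = record["label"]
```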
<font size="+1" color="red">Replace this cell with your code to read the nodes file into id2name, name2id, and id2label.</font>
Verify that you read the file correctly. The following will test two known hosts: *873* should be the BBC, a non-spam site, and *105715* should be a spam website that used to sell mobile phones.
If you get a *key not found* error, most likely you did not convert the ids to integers.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave as-is
print("%s: %s" % (id2name[873], id2label[873]))
print("%s: %s" % (id2name[105715], id2label[105715]))
print("Number of hosts: %s" % len(id2name))
```
Next, print how many hosts have label `spam`, `nonspam`, and `unlabeled`.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
<font size="+1" color="red">Replace this cell with your code to print how many hosts are spam, how many are nonspam, and how many are unlabeled (this should be the large majority).</font>
Now let's explore a small part of the graph. For this, you will need to open the file `INPUT_EDGES_FILENAME` which contains columns `source`, `destination`, and `weight` indicating that some pages in host id `source` point to pages in host id `destination`. The number of such pages is the `weight`.
The graph is too large so we will focus on two categories that tend to be heavily spammed: adult content and financial services. We will use the following:
```python
spammywords = ['escort', 'xx', 'girl', 'credit', 'mortgage', 'finance', 'debt', 'loan']
```
Now, create a directed graph `g = nx.DiGraph()` containing all the edges that fulfil **all three of the following conditions**:
1. The source contains one of the `spammywords` **or** the destination contains one of the `spammywords`
1. The source is labeled as either `spam` or `nonspam`
1. The destination is labeled as either `spam` or `nonspam`
Your graph should have nodes that are hostnames, so whenever you find such an edge in the input file, you should do:
```python
g.add_edge(id2name[source], id2name[destination])
```
Print the number of nodes in the resulting graph (`g.number_of_nodes()`); it should be less than 100. A sketch of this filtering step is included below.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
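A sketch of the filtering logic described above (hedged: it assumes the three dictionaries from the previous step and the `source`/`destination` columns given earlier):
```python
# Sketch: keep only edges between labeled hosts whose names contain a "spammy" word
spammywords = ['escort', 'xx', 'girl', 'credit', 'mortgage', 'finance', 'debt', 'loan']
g = nx.DiGraph()
with gzip.open(INPUT_EDGES_FILENAME, "rt", encoding="utf-8") as input_file:
    reader = csv.DictReader(input_file, delimiter=',', quotechar='"')
    for record in reader:
        source, destination = int(record["source"]), int(record["destination"])
        src_name, dst_name = id2name[source], id2name[destination]
        if not any(w in src_name or w in dst_name for w in spammywords):
            continue                              # condition 1
        if id2label[source] not in ("spam", "nonspam"):
            continue                              # condition 2
        if id2label[destination] not in ("spam", "nonspam"):
            continue                              # condition 3
        g.add_edge(src_name, dst_name)
print(g.number_of_nodes())
```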
<font size="+1" color="red">Replace this cell with your code to load a subgraph of the input graph, as described above.</font>
The following code, that you should leave as-is (or modify slightly, if you want), displays this subgraph.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave this code as-is, or modify slightly
colors = []
hostname_converted = {}
for hostname in g.nodes():
# Assign colors to nodes according to spam/nonspam labels
if id2label[name2id[hostname]] == 'spam':
colors.append('red')
elif id2label[name2id[hostname]] == 'nonspam':
colors.append('lightgreen')
else:
colors.append('white')
# Shorten the hostnames to generate labels
label = hostname.replace("www.", "").replace(".uk", "")
hostname_converted[hostname] = label
# Notice that if you re-run this cell the layout will be different every time
plt.figure(figsize=(20, 20))
plt.axis('off')
pos = nx.spring_layout(g)
nx.draw_networkx(g, pos, with_labels=True, node_size=400, node_color=colors, labels=hostname_converted)
```
<font size="+1" color="red">Replace this cell with a brief commentary on what you see in the plot above.</font>
# 2. Compute the degree of each node
Compute the out-degree of each node and store it in the dictionary id2degree. For this, you will need to read the edges file once, without trying to store the graph in main memory.
Remember that this file contains ``source``, ``destination``, ``weight`` records.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave this code as-is
id2degree = {}
N = len(id2name)
for nodeid in range(N):
id2degree[nodeid] = 0
```
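As a sketch (hedged: it counts each `source`→`destination` record as a single out-link, which is consistent with the expected output below):
```python
# Sketch: one pass over the edges file, incrementing the out-degree of every source
with gzip.open(INPUT_EDGES_FILENAME, "rt", encoding="utf-8") as input_file:
    reader = csv.DictReader(input_file, delimiter=',', quotechar='"')
    for record in reader:
        id2degree[int(record["source"])] += 1
```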
<font size="+1" color="red">Replace this cell with your code to read the degrees of nodes into id2degree.</font>
Verify that you are reading the file correctly. The following cell should print:
```
bc1.org.uk: degree 16
candycaine.skinthesun.co.uk: degree 22
www.top-mobile-phones.co.uk: degree 0
```
If you get a key not found error, most likely you did not convert the ids to integers or you did not initialize `id2degree`.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave this cell as-is
for nodeid in [890, 1469, 105715]:
print("%s: degree %d" % (id2name[nodeid], id2degree[nodeid]))
```
# 3. Compute PageRank
Perform `iterations=20` iterations with `alpha=0.85`. In each iteration, you will read the file of the graph, **without loading the entire graph in memory**. This means each iteration involves opening (and implicitly, closing) the edges file.
Your code should do the following:
* At the beginning, initialize the vector `pagerank` as a vector of 1/N and the vector `pagerank_aux` as a vector of 0s.
* For `iterations` iterations:
* Read the graph and for every link from *source* to *destination*:
* Add to `pagerank_aux[destination]` the value `pagerank[source]/degree`, where *degree* is the out-degree of the source node (i.e, its number of out-links).
* Set *pagerank* of every node to *alpha x pagerank_aux + (1.0-alpha) x (1.0/N)*.
* Set `pagerank_aux` to 0.0
Remember: do not keep the graph in memory, because that will limit the size of the graphs your code can handle. At every iteration you must read the file again. You can use the following template (a minimal sketch of a complete iteration is included after the constants cell below):
```python
for iteration in range(ITERATIONS):
print("Iteration %d of %d" % (iteration+1, ITERATIONS))
with gzip.open(INPUT_EDGES_FILENAME, "rt", encoding="utf-8") as input_file:
...
```
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave this cell as-is
ITERATIONS = 20
ALPHA = 0.85
pagerank_aux = [0.0] * N
pagerank = [1.0/N] * N
```
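Putting the description and the template together, one complete iteration could look like the following sketch (illustrative, not the only valid structure; it uses the constants defined in the cell above and skips sources with out-degree 0):
```python
# Sketch: ITERATIONS passes over the edges file, never keeping the graph in memory
for iteration in range(ITERATIONS):
    print("Iteration %d of %d" % (iteration + 1, ITERATIONS))
    with gzip.open(INPUT_EDGES_FILENAME, "rt", encoding="utf-8") as input_file:
        reader = csv.DictReader(input_file, delimiter=',', quotechar='"')
        for record in reader:
            source, destination = int(record["source"]), int(record["destination"])
            if id2degree[source] > 0:
                pagerank_aux[destination] += pagerank[source] / id2degree[source]
    pagerank = [ALPHA * aux + (1.0 - ALPHA) * (1.0 / N) for aux in pagerank_aux]
    pagerank_aux = [0.0] * N
```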
<font size="+1" color="red">Replace this cell with your code to compute PageRank.</font>
# 4. Nodes with largest values of PageRank
Print the top 20 hosts by PageRank, including the host name, and the PageRank value with 6 decimals.
You can use the `enumerate()` function which converts a list `[a, b, c]` into `[(0,a), (1,b), (2,c)]` and then `sort()` as follows. Suppose ``score`` contains ``[0.2, 0.7, 0.4]``:
```python
hosts_by_score = sorted(enumerate(score), key=lambda x: x[1], reverse=True)
```
This will return the list `[(1,0.7), (2,0.4), (0,0.2)]`.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
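Following the `enumerate()` pattern above, a minimal printing sketch could be:
```python
# Sketch: sort node ids by their PageRank value and print the top 20
hosts_by_score = sorted(enumerate(pagerank), key=lambda x: x[1], reverse=True)
for nodeid, score in hosts_by_score[:20]:
    print("%d %s %s %.6f" % (nodeid, id2name[nodeid], id2label[nodeid], score))
```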
<font size="+1" color="red">Replace this cell with code to print the 20 hosts having the largest PageRank. Print the host id, host name, label, and score with 6 decimals.</font>
<font size="+1" color="red">Replace this cell with a brief commentary indicating: (1) why do you think the top site is that one, and (2) what is the percentage of commercial, government, and educational sites you see among the top 20?</font>
# 5. Run non-spam PageRank
Now, write code to compute a non-spam version of PageRank. For this, simply ignore any link in which either the source or the destination is a known spam host.
You can query this with ``id2label[source] == "spam" or id2label[destination] == "spam"``.
To run this, first you need to compute the "no-spam degree" of the nodes in a dictionary `id2nsdegree`, and use those degrees instead of `id2degree`.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
<font size="+1" color="red">Replace this cell with code to compute id2nsdegree (ns stands for no-spam).</font>
Verify that you are reading the file correctly. The following cell should print:
```
bc1.org.uk: normal degree 16 nospam degree 16
candycaine.skinthesun.co.uk: normal degree 22 nospam degree 20
www.top-mobile-phones.co.uk: normal degree 0 nospam degree 0
```
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
```
# Leave this cell as-is
for nodeid in [890, 1469, 105715]:
print("%s: normal degree %d nospam degree %d" % (id2name[nodeid], id2degree[nodeid], id2nsdegree[nodeid]))
```
<font size="+1" color="red">Replace this cell with code to compute nspagerank (ns stands for no-spam).</font>
<font size="+1" color="red">Replace this cell with code to print the 20 hosts having the largest no-spam PageRank scores. Print the host id, host name, label, and score with 6 decimals.</font>
# 6. Compute spam gain
Finally, compute the gain of every host as *(Normal PageRank) / (No spam PageRank)*.
Among the top 50 hosts you might find many "spam" hosts (businesses that look illegitimate or that tend to rely on spam, such as gambling, pornography, counterfeits, and scams). You might also find "normal" sites (i.e., websites that look legitimate), because spammers also point to legitimate sites to disguise their actions.
Print the following:
* The hostname
* Its spam/nospam label
* Its gain *(Normal PageRank) / (No-spam PageRank)* with two decimals (e.g., "3.22")
* Its PageRank in scientific notation with two significant digits (e.g., "5.8e-06")
* Its no-spam PageRank in scientific notation with two significant digits (e.g., "5.8e-06")
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
<font size="+1" color="red">Replace this cell with your code to print the top 50 hosts by spam gain.</font>
<font size="+1" color="red">Replace this cell with a brief commentary on the websites you find on this list. Notice that many of them have a non-spam-PageRank that is the same value, why do you think that happens?</font>
# Deliver (individually)
A .zip file containing:
* This notebook.
## Extra points available
If you would like to go for extra points (+2, so your maximum grade can be a 12 in this assignment), include a Cytoscape drawing of a sample of hosts (e.g., the top ones by PageRank, or the top ones by degree, perhaps restricted to '.co.uk' sites), painting the spam nodes in one color and the non-spam nodes in another. Exclude the nodes that are *unlabeled*.
Include in your sample at least a few hundred hosts; as many as possible without crashing Cytoscape or having to wait an unreasonable amount of time for the layout to be completed.
Remember that the `subgraph` function in NetworkX allows you to select a sub-graph given a list of nodes.
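For example (a sketch; `selected_hosts` is a hypothetical list of hostnames you would choose yourself):
```python
# 'selected_hosts' is a hypothetical list of chosen hostnames
sample_graph = g.subgraph(selected_hosts)
```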
**Note:** if you go for the extra points, add ``<font size="+2" color="blue">Additional results: spam/nonspam visualization</font>`` at the top of your notebook.
<font size="-1" color="gray">(Remove this cell when delivering.)</font>
<font size="+2" color="#003300">I hereby declare that, except for the code provided by the course instructors, all of my code, report, and figures were produced by myself.</font>
```
import numpy as np
import pandas as pd
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from scipy.sparse import hstack
class_names = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
train = pd.read_csv('train.csv').fillna(' ')
test = pd.read_csv('test.csv').fillna(' ')
train_text = train['comment_text']
test_text = test['comment_text']
all_text = pd.concat([train_text, test_text])
char_vectorizer = TfidfVectorizer(
sublinear_tf=True,
strip_accents='unicode',
analyzer='char',
stop_words='english',
ngram_range=(3, 6),
max_features=50000)
char_vectorizer.fit(all_text)
import pickle
f = open('char_vectorizer.pkl', 'wb')
pickle.dump(char_vectorizer, f)
f.close()
# feature_labels was left undefined here; the learned character n-grams can be inspected with char_vectorizer.get_feature_names()
train_char_features = char_vectorizer.transform(train_text)
test_char_features = char_vectorizer.transform(test_text)
with open('./identity_hate_train_matrix.pkl', 'rb') as f:
X_train = pickle.load(f)
with open('./identity_hate_test_matrix.pkl', 'rb') as f:
X_test = pickle.load(f)
X_train.shape
scores = []
submission = pd.DataFrame.from_dict({'id': test['id']})
for class_name in class_names:
train_target = train[class_name]
classifier = LogisticRegression(solver='sag')
cv_score = np.mean(cross_val_score(classifier, train_char_features, train_target, cv=3, scoring='roc_auc'))
scores.append(cv_score)
print('CV score for class {} is {}'.format(class_name, cv_score))
classifier.fit(train_char_features, train_target)
submission[class_name] = classifier.predict_proba(test_char_features)[:, 1]
print('Total CV score is {}'.format(np.mean(scores)))
submission.to_csv('submission.csv', index=False)
values = np.r_[[0.1, 1.0, 1.5], np.linspace(2.0, 10.0, 9)]
print('C values in:', values)
for C in values:
train_target = X_train
classifier = LogisticRegression(solver='saga', tol=1e-4, max_iter=200, C=C)
cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=3,
scoring='roc_auc'))
print('CV score for class identity_hate for C = {} is: {}'.format(C, cv_score))
print('Optimal C value is: 7.0')
```
For large values of C the cross-validation results differ only slightly, so parameter tuning can be skipped for the remaining classes.
# ExtraTreesRegressor
```
from sklearn.ensemble import ExtraTreesRegressor
for num_f in range(5, 41, 5):
train_target = X_train
#classifier = LogisticRegression(solver='sag')
classifier = ExtraTreesRegressor(max_depth=5, max_features=num_f, n_estimators=50)
cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=3,
scoring='roc_auc'))
print('CV score for class identity_hate for {} max_features is {}'.format(num_f, cv_score))
for depth in range(3, 30, 2):
train_target = X_train
#classifier = LogisticRegression(solver='sag')
classifier = ExtraTreesRegressor(max_depth=depth, max_features=10, n_estimators=50)
cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=3,
scoring='roc_auc'))
print('CV score for class identity_hate for {} depth is {}'.format(depth, cv_score))
```
### ROC AUC for n_estimators = 2000: 0.977. The model did not perform particularly well.
# Gradient Boosting (unfinished)
```
from sklearn.ensemble import GradientBoostingRegressor
for value in [1, 3, 5, 10, 15, 20]:
train_target = X_train
#classifier = LogisticRegression(solver='sag')
classifier = GradientBoostingRegressor(max_depth=5, max_features=value, n_estimators=50)
cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=3,
scoring='roc_auc'))
print('CV score for class identity_hate for {} max_features is {}'.format(value, cv_score))
for value in [3, 5, 7, 9, 11, 15]:
train_target = X_train
#classifier = LogisticRegression(solver='sag')
classifier = GradientBoostingRegressor(max_depth=value, max_features=10, n_estimators=50)
cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=5,
scoring='roc_auc'))
print('CV score for class identity_hate for {} max_depth is {}'.format(value, cv_score))
train_target = X_train
#classifier = LogisticRegression(solver='sag')
classifier = GradientBoostingRegressor(max_depth=3, n_estimators=100, max_features=10)
%time cv_score = np.mean(cross_val_score(classifier, X=train_target, y=train['identity_hate'], cv=3,\
scoring='roc_auc'))
print('CV score for class identity_hate is {}'.format(cv_score))
```
<a href="https://colab.research.google.com/github/akmhel/Plate-Number-Classification/blob/master/MLTSK%201.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
!unzip negative_images.zip
!unzip plate_number.zip
import os, cv2, itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
TRAIN_DIR = './plate_number/'
TEST_DIR = './negative_images/'
ROWS = 64
COLS = 64
CHANNELS = 3
train_images = [TRAIN_DIR + i for i in os.listdir(TRAIN_DIR)]
test_images = [TEST_DIR + i for i in os.listdir(TEST_DIR)]
def read_image(file_path):
img = cv2.imread(file_path, cv2.IMREAD_COLOR)
return cv2.resize(img, (ROWS, COLS),interpolation = cv2.INTER_CUBIC)
def prep_data(images):
u = len(images)
v_z = ROWS*COLS*CHANNELS
X = np.ndarray((v_z,u), dtype=np.uint8)
y = np.zeros((1,u))
print("X.shape is {}".format(X.shape))
for i,image_file in enumerate(images):
image = read_image(image_file)
X[:,i] = np.squeeze(image.reshape((v_z,1)))
if '-' in image_file.lower():
y[0,i] = 1
elif 'download' in image_file.lower():
y[0,i] = 0
else :
y[0,i] = image_file.split('/')[-1].split('.')[0]
if i%5000 == 0:
print("Proceed {} of {}".format(i, u))
return X,y
X_img, y_img = prep_data(train_images + test_images)
classes = {0: 'Normal Image',
1: 'License Plate Number'}
def show_images(X, y, idx) :
image = X[idx]
image = image.reshape((ROWS, COLS, CHANNELS))
plt.figure(figsize=(4,2))
plt.imshow(image)
plt.title("This is a {}".format(classes[y[idx,0]]))
plt.show()
show_images(X_img.T, y_img.T, 67)  # use the arrays prepared above (no train/test split is defined in this notebook)
from sklearn.linear_model import LogisticRegressionCV
clf = LogisticRegressionCV()
X_img_lr, y_img_lr = X_img.T, y_img.T.ravel()
clf.fit(X_img_lr,y_img_lr)
print("Model accuracy: {:.2f}%".format(clf.score(X_img_lr, y_img_lr)*100))
def show_image_prediction(X, idx, model):
image = X[idx].reshape(1,-1)
image_class = classes[model.predict(image).item()]
image = image.reshape((ROWS, COLS, CHANNELS))
plt.figure(figsize = (4,2))
plt.imshow(image)
plt.title("Test {} : I think this is a {}".format(idx, image_class))
plt.show()
X_img_lr, y_img_lr = X_img.T, y_img.T
for i in np.random.randint(0, len(X_img_lr), 10):
show_image_prediction(X_img_lr, i, clf)
```
```
import tensorflow as tf
from keras.layers import Input, Dense, concatenate, BatchNormalization
from keras.models import Model
from keras.datasets import cifar10
from keras.optimizers import Adam, SGD
from keras.regularizers import l1,l2
from skimage.color import rgb2gray, gray2rgb, rgb2hsv, hsv2rgb
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import threading
import multiprocessing
import math
def rgb2hsl(rgb):
def core(_rgb, _hsl):
irgb = _rgb.astype(np.uint16)
ir, ig, ib = irgb[:, :, 0], irgb[:, :, 1], irgb[:, :, 2]
h, s, l = _hsl[:, :, 0], _hsl[:, :, 1], _hsl[:, :, 2]
imin, imax = irgb.min(2), irgb.max(2)
iadd, isub = imax + imin, imax - imin
ltop = (iadd != 510) * (iadd > 255)
lbot = (iadd != 0) * (ltop == False)
l[:] = iadd.astype(np.float) / 510
fsub = isub.astype(np.float)
s[ltop] = fsub[ltop] / (510 - iadd[ltop])
s[lbot] = fsub[lbot] / iadd[lbot]
not_same = imax != imin
is_b_max = not_same * (imax == ib)
not_same_not_b_max = not_same * (is_b_max == False)
is_g_max = not_same_not_b_max * (imax == ig)
is_r_max = not_same_not_b_max * (is_g_max == False) * (imax == ir)
h[is_r_max] = ((0. + ig[is_r_max] - ib[is_r_max]) / isub[is_r_max])
h[is_g_max] = ((0. + ib[is_g_max] - ir[is_g_max]) / isub[is_g_max]) + 2
h[is_b_max] = ((0. + ir[is_b_max] - ig[is_b_max]) / isub[is_b_max]) + 4
h[h < 0] += 6
h[:] /= 6
hsl = np.zeros(rgb.shape, dtype=np.float)
cpus = multiprocessing.cpu_count()
length = int(math.ceil(float(hsl.shape[0]) / cpus))
line = 0
threads = []
while line < hsl.shape[0]:
line_next = line + length
thread = threading.Thread(target=core, args=(rgb[line:line_next], hsl[line:line_next]))
thread.start()
threads.append(thread)
line = line_next
for thread in threads:
thread.join()
return hsl
def hsl2rgb(hsl):
def core(_hsl, _frgb):
h, s, l = _hsl[:, :, 0], _hsl[:, :, 1], _hsl[:, :, 2]
fr, fg, fb = _frgb[:, :, 0], _frgb[:, :, 1], _frgb[:, :, 2]
q = np.zeros(l.shape, dtype=np.float)
lbot = l < 0.5
q[lbot] = l[lbot] * (1 + s[lbot])
ltop = lbot == False
l_ltop, s_ltop = l[ltop], s[ltop]
q[ltop] = (l_ltop + s_ltop) - (l_ltop * s_ltop)
p = 2 * l - q
q_sub_p = q - p
is_s_zero = s == 0
l_is_s_zero = l[is_s_zero]
per_3 = 1./3
per_6 = 1./6
two_per_3 = 2./3
def calc_channel(channel, t):
t[t < 0] += 1
t[t > 1] -= 1
t_lt_per_6 = t < per_6
t_lt_half = (t_lt_per_6 == False) * (t < 0.5)
t_lt_two_per_3 = (t_lt_half == False) * (t < two_per_3)
t_mul_6 = t * 6
channel[:] = p.copy()
channel[t_lt_two_per_3] = p[t_lt_two_per_3] + q_sub_p[t_lt_two_per_3] * (4 - t_mul_6[t_lt_two_per_3])
channel[t_lt_half] = q[t_lt_half].copy()
channel[t_lt_per_6] = p[t_lt_per_6] + q_sub_p[t_lt_per_6] * t_mul_6[t_lt_per_6]
channel[is_s_zero] = l_is_s_zero.copy()
calc_channel(fr, h + per_3)
calc_channel(fg, h.copy())
calc_channel(fb, h - per_3)
frgb = np.zeros(hsl.shape, dtype=np.float)
cpus = multiprocessing.cpu_count()
length = int(math.ceil(float(hsl.shape[0]) / cpus))
line = 0
threads = []
while line < hsl.shape[0]:
line_next = line + length
thread = threading.Thread(target=core, args=(hsl[line:line_next], frgb[line:line_next]))
thread.start()
threads.append(thread)
line = line_next
for thread in threads:
thread.join()
return (frgb*255).round().astype(np.uint8)
# load the dataset
(x_train, _), (x_test, _) = cifar10.load_data()
xtrain_temp=[]
for i in range(x_train.shape[0]):
xtrain_temp.append(rgb2hsl(x_train[i]))
xtest_temp=[]
for i in range(x_test.shape[0]):
xtest_temp.append(rgb2hsl(x_test[i]))
xtrain = np.asarray(xtrain_temp)
xtest = np.asarray(xtest_temp)
xtrain = xtrain.astype('float32')
xtest = xtest.astype('float32')
xtrain_red = xtrain[:,:,:,0]
xtrain_green = xtrain[:,:,:,1]
xtrain_blue = xtrain[:,:,:,2]
xtest_red = xtest[:,:,:,0]
xtest_green = xtest[:,:,:,1]
xtest_blue = xtest[:,:,:,2]
xtrain_red = xtrain_red.reshape(len(xtrain_red), np.prod(xtrain_red.shape[1:]))
xtrain_green = xtrain_green.reshape(len(xtrain_green), np.prod(xtrain_green.shape[1:]))
xtrain_blue = xtrain_blue.reshape(len(xtrain_blue), np.prod(xtrain_blue.shape[1:]))
xtest_red = xtest_red.reshape(len(xtest_red), np.prod(xtest_red.shape[1:]))
xtest_green = xtest_green.reshape(len(xtest_green), np.prod(xtest_green.shape[1:]))
xtest_blue = xtest_blue.reshape(len(xtest_blue), np.prod(xtest_blue.shape[1:]))
train_dset = []
train_dset.extend(xtrain_red)
train_dset.extend(xtrain_green)
train_dset.extend(xtrain_blue)
dset_train = np.asarray(train_dset)
test_dset = []
test_dset.extend(xtest_red)
test_dset.extend(xtest_green)
test_dset.extend(xtest_blue)
dset_test = np.asarray(test_dset)
input_layer = xtrain_red.shape[1]
hid_layer1 = 576
hid_layer2 = 256
hid_layer3 = 64
hid_layer4 = 10
hid_layer5 = hid_layer3
hid_layer6 = hid_layer2
hid_layer7 = hid_layer1
output_layer = input_layer
print(dset_train.shape)
print(dset_test.shape)
input_img = Input(shape=(input_layer,))
# network architecture
# vanilla autoencoder with fully-connected layer
# ENCODER
x = Dense(units = hid_layer1, activation='relu')(input_img)
x = Dense(units = hid_layer2, activation='relu')(x)
x = Dense(units = hid_layer3, activation='relu')(x)
encoded = Dense(units = hid_layer4, activation='relu',kernel_regularizer = l2(3e-5), activity_regularizer = l1(10e-12))(x)
# DECODER
x = Dense(units = hid_layer5, activation = 'relu')(encoded)
x = Dense(units = hid_layer6, activation='relu')(x)
x = Dense(units = hid_layer7, activation='relu')(x)
decoded = Dense(units = output_layer, activation='sigmoid')(x)
autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)
autoencoder.compile(optimizer= Adam(lr=1e-5), loss='mean_absolute_error')
autoencoder.summary()
# train the model
history = autoencoder.fit(dset_train, dset_train,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(dset_test, dset_test))
# list all data in history
print(history.history.keys())
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# show the result at the decoder output
decoded_imgs_red = autoencoder.predict(xtest_red)
decoded_imgs_green = autoencoder.predict(xtest_green)
decoded_imgs_blue = autoencoder.predict(xtest_blue)
counter = 0
n = 10
test_imgs = np.zeros((32,32,3))
dec_imgs = np.zeros((32,32,3))
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
test_imgs[:,:,0] = xtest_red[counter + i].reshape(32, 32)
test_imgs[:,:,1] = xtest_green[counter + i].reshape(32, 32)
test_imgs[:,:,2] = xtest_blue[counter + i].reshape(32, 32)
plt.imshow(hsl2rgb(test_imgs))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
dec_imgs[:,:,0] = decoded_imgs_red[counter + i].reshape(32, 32)
dec_imgs[:,:,1] = decoded_imgs_green[counter + i].reshape(32, 32)
dec_imgs[:,:,2] = decoded_imgs_blue[counter + i].reshape(32, 32)
plt.imshow(hsl2rgb(dec_imgs))
# plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
# show the representation at the bottleneck
encoded_imgs_red = encoder.predict(xtest_red)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(2, 5, i + 1)
plt.plot(encoded_imgs_red[counter + i])
# plt.gray()
ax.get_xaxis().set_visible(True)
ax.get_yaxis().set_visible(True)
plt.show()
# show the representation at the bottleneck
encoded_imgs_green = encoder.predict(xtest_green)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(2, 5, i + 1)
plt.plot(encoded_imgs_green[counter + i])
# plt.gray()
ax.get_xaxis().set_visible(True)
ax.get_yaxis().set_visible(True)
plt.show()
# show the representation at the bottleneck
encoded_imgs_blue = encoder.predict(xtest_blue)
n = 10
plt.figure(figsize=(20, 8))
for i in range(n):
ax = plt.subplot(2, 5, i + 1)
plt.plot(encoded_imgs_blue[counter + i])
# plt.gray()
ax.get_xaxis().set_visible(True)
ax.get_yaxis().set_visible(True)
plt.show()
autoencoder.save('autoencoder_baseline_hsl.h5')
from keras.models import load_model
test_model = load_model('autoencoder_baseline_hsl.h5')
encoding = Input(shape = (hid_layer4,))
# DECODER
y = Dense(units = hid_layer5, activation = 'relu')(encoding)
y = Dense(units = hid_layer6, activation='relu')(y)
y = Dense(units = hid_layer7, activation='relu')(y)
decode_avg = Dense(units = output_layer, activation='sigmoid')(y)
newModel = Model(encoding, decode_avg)
newModel.summary()
newModel.layers[1].set_weights(test_model.layers[5].get_weights())
newModel.layers[2].set_weights(test_model.layers[6].get_weights())
newModel.layers[3].set_weights(test_model.layers[7].get_weights())
newModel.layers[4].set_weights(test_model.layers[8].get_weights())
avg_bottleneck = (encoded_imgs_red + encoded_imgs_green + encoded_imgs_blue)/3.
# show the result at the decoder output
decoded_imgs = newModel.predict(avg_bottleneck)
n = 10
test_imgs = np.zeros((32,32,3))
plt.figure(figsize=(20, 4))
for i in range(n):
# display original
ax = plt.subplot(2, n, i + 1)
test_imgs[:,:,0] = xtest_red[counter + i].reshape(32, 32)
test_imgs[:,:,1] = xtest_green[counter + i].reshape(32, 32)
test_imgs[:,:,2] = xtest_blue[counter + i].reshape(32, 32)
plt.imshow(hsl2rgb(test_imgs))
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
# display reconstruction
ax = plt.subplot(2, n, i + n + 1)
plt.imshow(decoded_imgs[counter + i].reshape(32,32))
# plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
err_red = np.sqrt(np.square(xtest_red[1] - decoded_imgs_red[1]))
plt.hist(err_red)
err_green = np.sqrt(np.square(xtest_green[1] - decoded_imgs_green[1]))
plt.hist(err_green)
err_blue = np.sqrt(np.square(xtest_blue[1] - decoded_imgs_blue[1]))
plt.hist(err_blue)
```
```
# 4.6.4. Implementation from Scratch
import torch
from torch import nn
from d2l import torch as d2l
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
def dropout_layer(X, dropout):
assert 0 <= dropout <= 1
# In this case, all elements are dropped out
if dropout == 1:
return torch.zeros_like(X)
# In this case, all elements are kept
if dropout == 0:
return X
mask = (torch.rand(X.shape) > dropout).float()
return mask * X / (1.0 - dropout)
X = torch.arange(16, dtype=torch.float32).reshape((2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
# 4.6.4.1. Defining Model Parameters
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256
# 4.6.4.2. Defining the Model
dropout1, dropout2 = 0.2, 0.5
class Net(nn.Module):
def __init__(self, num_inputs, num_outputs, num_hiddens1, num_hiddens2,
is_training=True):
super(Net, self).__init__()
self.num_inputs = num_inputs
self.training = is_training
self.lin1 = nn.Linear(num_inputs, num_hiddens1)
self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)
self.lin3 = nn.Linear(num_hiddens2, num_outputs)
self.relu = nn.ReLU()
def forward(self, X):
H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs))))
# Use dropout only when training the model
if self.training == True:
# Add a dropout layer after the first fully connected layer
H1 = dropout_layer(H1, dropout1)
H2 = self.relu(self.lin2(H1))
if self.training == True:
# Add a dropout layer after the second fully connected layer
H2 = dropout_layer(H2, dropout2)
out = self.lin3(H2)
return out
net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2)
# 4.6.4.3. Training and Testing
num_epochs, lr, batch_size = 10, 0.5, 256
loss = nn.CrossEntropyLoss()
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
# 4.6.5. Concise Implementation
net = nn.Sequential(
nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
# Add a dropout layer after the first fully connected layer
nn.Dropout(dropout1), nn.Linear(256, 256), nn.ReLU(),
# Add a dropout layer after the second fully connected layer
nn.Dropout(dropout2), nn.Linear(256, 10))
def init_weights(m):
if type(m) == nn.Linear:
nn.init.normal_(m.weight, std=0.01)
net.apply(init_weights);
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
```
### **Import Google Drive**
```
from google.colab import drive
drive.mount('/content/drive')
```
### **Import Library**
```
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
```
### **Load Data**
```
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
```
### **Data Preparation**
```
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
sns.countplot(y_val)
plt.title('Total Data Validasi')
sns.countplot(y_test)
plt.title('Total Data Test')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
```
### **Model Parameters**
```
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
```
### **Data Generator**
```
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
```
### **Define Model**
```
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.MobileNetV2(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.15)(x)
x =tf.keras.layers.Dense(512, activation='relu')(x)
x =tf.keras.layers.Dropout(0.15)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
```
### **Train Top Layers**
```
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Training time:', end - start)
```
### **Train Fine Tuning**
```
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
```
### **Model Graph**
```
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
```
### **Evaluate Model**
```
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
```
|
github_jupyter
|
from google.colab import drive
drive.mount('/content/drive')
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
sns.countplot(y_val)
plt.title('Total Data Validasi')
sns.countplot(y_test)
plt.title('Total Data Test')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.MobileNetV2(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.15)(x)
x =tf.keras.layers.Dense(512, activation='relu')(x)
x =tf.keras.layers.Dropout(0.15)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Waktu Training:', end - start)
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
| 0.388618 | 0.715978 |
___
<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Content Copyright by Pierian Data</em></center>
# Useful Operators
There are a few built-in functions and "operators" in Python that don't fit well into any category, so we will go over them in this lecture. Let's begin!
## range
The range function allows you to quickly *generate* a list of integers; this comes in handy a lot, so take note of how to use it! There are 3 parameters you can pass: a start, a stop, and a step size. Let's see some examples:
```
range(0,11)
```
Note that this is a **generator** function, so to actually get a list out of it, we need to cast it to a list with **list()**. What is a generator? It's a special type of function that will generate information and not need to save it to memory. We haven't talked about functions or generators yet, so just keep this in your notes for now, we will discuss this in much more detail later on in your training!
```
# Notice how 11 is not included, up to but not including 11, just like slice notation!
list(range(0,11))
list(range(0,12))
# Third parameter is step size!
# step size just means how big of a jump/leap/step you
# take from the starting number to get to the next number.
list(range(0,11,2))
list(range(0,101,10))
```
## enumerate
enumerate is a very useful function to use with for loops. Let's imagine the following situation:
```
index_count = 0
for letter in 'abcde':
print("At index {} the letter is {}".format(index_count,letter))
index_count += 1
```
Keeping track of how many loops you've gone through is so common that enumerate was created so you don't need to worry about creating and updating this index_count or loop_count variable.
```
# Notice the tuple unpacking!
for i,letter in enumerate('abcde'):
print("At index {} the letter is {}".format(i,letter))
```
## zip
Notice the format enumerate actually returns, let's take a look by transforming it to a list()
```
list(enumerate('abcde'))
```
It was a list of tuples, meaning we could use tuple unpacking during our for loop. This data structure is actually very common in Python, especially when working with outside libraries. You can use the **zip()** function to quickly create a list of tuples by "zipping" up two lists together.
```
mylist1 = [1,2,3,4,5]
mylist2 = ['a','b','c','d','e']
# This one is also a generator! We will explain this later, but for now let's transform it to a list
zip(mylist1,mylist2)
list(zip(mylist1,mylist2))
```
To use the generator, we could just use a for loop
```
for item1, item2 in zip(mylist1,mylist2):
print('For this tuple, first item was {} and second item was {}'.format(item1,item2))
```
## in operator
We've already seen the **in** keyword during the for loop, but we can also use it to quickly check if an object is in a list
```
'x' in ['x','y','z']
'x' in [1,2,3]
```
## not in
We can combine **in** with a **not** operator, to check if some object or variable is not present in a list.
```
'x' not in ['x','y','z']
'x' not in [1,2,3]
```
## min and max
Quickly check the minimum or maximum of a list with these functions.
```
mylist = [10,20,30,40,100]
min(mylist)
max(mylist)
```
## random
Python comes with a built in random library. There are a lot of functions included in this random library, so we will only show you two useful functions for now.
```
from random import shuffle
# This shuffles the list "in-place" meaning it won't return
# anything, instead it will affect the list passed
shuffle(mylist)
mylist
from random import randint
# Return random integer in range [a, b], including both end points.
randint(0,100)
# Return random integer in range [a, b], including both end points.
randint(0,100)
```
## input
```
input('Enter Something into this box: ')
```
|
github_jupyter
|
range(0,11)
# Notice how 11 is not included, up to but not including 11, just like slice notation!
list(range(0,11))
list(range(0,12))
# Third parameter is step size!
# step size just means how big of a jump/leap/step you
# take from the starting number to get to the next number.
list(range(0,11,2))
list(range(0,101,10))
index_count = 0
for letter in 'abcde':
print("At index {} the letter is {}".format(index_count,letter))
index_count += 1
# Notice the tuple unpacking!
for i,letter in enumerate('abcde'):
print("At index {} the letter is {}".format(i,letter))
list(enumerate('abcde'))
mylist1 = [1,2,3,4,5]
mylist2 = ['a','b','c','d','e']
# This one is also a generator! We will explain this later, but for now let's transform it to a list
zip(mylist1,mylist2)
list(zip(mylist1,mylist2))
for item1, item2 in zip(mylist1,mylist2):
print('For this tuple, first item was {} and second item was {}'.format(item1,item2))
'x' in ['x','y','z']
'x' in [1,2,3]
'x' not in ['x','y','z']
'x' not in [1,2,3]
mylist = [10,20,30,40,100]
min(mylist)
max(mylist)
from random import shuffle
# This shuffles the list "in-place" meaning it won't return
# anything, instead it will affect the list passed
shuffle(mylist)
mylist
from random import randint
# Return random integer in range [a, b], including both end points.
randint(0,100)
# Return random integer in range [a, b], including both end points.
randint(0,100)
input('Enter Something into this box: ')
| 0.332852 | 0.953449 |
```
# default_exp about
```
# About
> Why write a transaction analytics library?
```
#hide
from nbdev.showdoc import *
```
Player behaviour tracking research as an academic discipline is growing fast. As operators provide more data to researchers, new analytical methods are being developed and published by researchers from psychology, computer science, economics, and more.
Until now, no open-source library has existed that meets a core need of this growing field: replicating studies. This means researchers need to implement others' methods themselves, which, on top of being a labour-intensive task, increases the risk of bugs being introduced and of their own work not being replicable.
The gamba library aims to provide a collection of methods for reproducing existing work, therefore raising the baseline of the capabilities of researchers in the field - with the ultimate effect of advancing the rate of scientific progress. Although gamba can never be a unified framework for reproducing all work in the field, it can provide new and existing researchers with the opportunity to explore analytical code themselves. New discoveries, approaches, and insights are inevitable taking this approach. By using the library, and sharing your extensions and experience, you will be helping progress our field in a tangible and impactful way, which will help us all contribute to creating more effective consumer protection measures, and better understand new forms of gambling.
On top of this, the open-source nature of the gamba library in the context of player behaviour tracking research has several important benefits;
- ***transparency opposes bias*** - because the library is open source, and because it can replicate studies, researchers who use it inherently promote analytical transparency, decreasing the possibility of hidden bias from funding or stakeholders.
- ***reproducibility promotes learning*** - because the library is open source, all researchers have a lower barrier to entry than ever before to making new discoveries in the field. This means opening the doors for more researchers, more analysts, and better science.
- ***methods are available instantly*** - by open sourcing implementations of existing methods, they can be quickly applied to existing data, decreasing the time-to-impact and time-to-replication of academic research.
- ***methods can be scrutinised*** - by publishing analytical code, it can be scrutinised by experienced researchers and programmers who can then improve it. This means more efficient, more accurate code than can be achieved alone, improving the quality of everyone's analytical capabilities.
|
github_jupyter
|
# default_exp about
#hide
from nbdev.showdoc import *
| 0.207215 | 0.949342 |
# Introduction to if-Statements
## Objectives
At the end of this notebook you should be able to:
- understand the logic behind if-statement
- know the syntax of if-statements (if, elif, else; nested if statements (and or and not))
- use comparison operators and define boolean variables
- integrate users input with the function input()
## Logic
The simplest way to control the flow of your Python program is with an `if` statement. From a high level, an `if` statement allows us to check whether or not a certain condition is true. If it is, certain operations will be performed. Otherwise, those operations will not be executed.
For example, say we're asked to write a program that takes a bunch of numbers and gives back to us those that are even. We would need to write an `if` statement that identifies whether or not a number is even (we'll talk later about how to do this), and then give back only those that meet the even condition. This is a program that will be entirely within our ability to implement at the end of next week; for now, though, let's focus on the `if` statement.
The general syntax of an `if` statement in Python is:
```python
if condition:
if_block_statement
```
Notice how the `if` statement, after the condition, ends in a colon `:`. This is the way that Python declares the start of an indentation block. The purposes of indentation blocks manifest themselves in many different ways. With our `if` statement, just know that they mark a section of code that is run under specific circumstances, when the condition is true. What does it mean, then, for a condition to be true? To understand this, let's look at conditionals.
## Conditionals
Let's tackle this one part at a time. What does it mean to be a condition? Really, all an `if` is checking is whether the conditional evaluates to `True` or `False`. In English, it checks if the condition is true or not. If the condition is true, then the body of the `if` statement is executed. If the condition is false, the `if` block is skipped. Intuitively, true and false are concepts that make perfect sense to us. But, we should take the time to clearly define them in a programming context.
`True` and `False` are what we call booleans in logic, and what Python calls them (`bool` for short). They are a special variable type with many potential uses; mainly they are used as a way to put a label on the truth of a statement. There are two specifically reserved words for bools in Python, `True` and `False`. Note that these begin with capital letters.
```
type(True)
type(False)
```
In addition, a wide variety of statements can evaluate to booleans. The ones that we will focus on today are the equalities, *equal to* and *not equal to*, and the inequalities, *less than*, *greater than*, *less than or equal to* and *greater than or equal to*. These comparisons are available in Python via `==`, `!=`, `<`, `>`, `<=` and `>=`, respectively. Consider the following (in)equality statements. Try changing them to other numerics and see what happens.
```
1 == 2
1 != 2
1 < 2
1 > 2
1 <= 2
1 >= 2
```
## Using the If
Now that we understand conditionals, let's talk about how we can use them with variables to make our programs dynamic. Consider the following code block.
```python
if x > 5:
x += 10
print(x)
```
**Note**: The print function simply pipes the value passed to it to the console.
In the above code, we don't need to know what the value of `x` is, but we can say that if it's greater than 5, it will come out of the code block 10 greater than before the `if` statement.
From what we know so far, this functionality isn't super useful. So, let's quickly go over a way that we can make our Python more flexible. Until now, we've had to hard code any variable or value that we want to use in our program. Python has a built in way to accept input from the user of a program. Let's examine this now. Consider the following code:
```
x = input('Please enter a number: ')
print(x)
```
Try executing the above cell (with Shift-Enter). `input()` will halt anything else from happening, so nothing will happen until you type something and press enter. This is the function that we will be using to get input from a user of our programs. We will use it frequently for the next couple of weeks as you write solutions to the assignment questions. Now that we have a way to get arbitrary input from the user of our programs, we can begin to see the full power of the `if`. Let's combine the last two code blocks from above:
```
x = int(input('Please enter a number: '))
if x > 5:
x += 10
print(x)
```
Try running the above cell, entering different values. Given what we know the `if` statement is supposed to do, are you getting values that you'd expect?
**Note**: `input()` actually interprets the input as strings, so we have to manually tell Python to treat the number we pass as an integer with `int()`. We'll talk about strings more next week.
This may seem like a trivial example, and therefore, not very exciting. Let me assure you, though, that what you have just learned is amazingly powerful! So, congratulations!
## Building on the If
Ok, so, the `if` is cool. But, it seems like there are only so many things you can do with it. Let's summarize this with what's known as a *flow diagram*. (If you don't see an image below, make sure that you started this notebook from the folder for the day, not the preclass folder.)

You can see that there are two branches created by the `if` statement, one when the condition is true, and the other when it is false. In the former case, the conditional code is executed, and in the latter, the conditional code is ignored. But what if we wanted to check more than one thing (i.e. have more than two branches in our flow diagram)?
Python gives us two ways to do this. First, by offering other conditionals, `elif` and `else`, and second by allowing us to combine conditions with logicals `and`, `or` and `not`.
### Elif and Else
In addition to the `if`, Python provides us with two other statements to build out those logical trees, the `elif` and the `else`. The `elif` is just like the `if` - it accepts a condition to check the truth of and has an indented code block that is executed when that condition evaluates to `True`. The `else` is similar, but it doesn't accept a condition. Instead, it mainly acts as a catch all for any other situation that you didn't cover with your `if`s and `elif`s. Note that there can only be a single `if` and up to a single `else`, but any number of `elif`s in an `if`-`elif`-`else` block. Let's take a closer look at this in the following code block:
```
x = int(input('Please enter a number: '))
if x < 0:
print('You entered a negative number.')
elif x > 0:
print('You entered a positive number.')
else:
print('You entered the number 0.')
```
Try running the above cell and enter different numbers. Before you submit the number, think about what you think will be printed based on what you see in the conditionals. Can you explain what this `if`-`elif`-`else` block is doing in english?
Let's specifically talk about how the `if`-`elif`-`else` statements work. The programmers of Python designed these statements so that they would execute highly efficiently. To do this, Python goes through your `if`-`elif`-`else` statements one at a time. When it encounters a condition that evaluates to `True`, it will execute the corresponding conditional code block **and** then skip to the line directly following the last conditional block. Let's examine this in the following code:
```
x = int(input('Please enter a number: '))
if x > 5:
print('You entered a number bigger than 5.')
elif x > 0:
print('You entered a positive number.')
elif x < 0:
print('You entered a negative number.')
else:
print('You entered the number 0.')
```
Try running the above code and enter the number 5, and then again with 6. Before you do so, what do you think will be printed? Try it out. You may get slightly unexpected results. But, they will soon make perfect sense.
Knowing what is going on in the above code will allow you full control over the flow of your programs. In the first case, when you entered 5, you got something unsurprising. The only condition that evaluates to true when `x` is 5 is the second one.
However, the second example prints only "You entered a number bigger than 5." And, even though 6 is greater than 0, 'You entered a positive number.' was not printed. This shows that only one of the conditional blocks in an `if`-`elif`-`else` statement will ever be evaluated. Sounds efficient, doesn't it?
**Note**: The `else` part of the statement is actually optional. If it is not included, then we'd notice that at most one of the conditional blocks in an `if`-`else` statement will be evaluated.
### And, Or and Not
There are plenty of times when we want to execute some specific code when more than one condition is true. Check out the following code snippet where we want to check if a number is between 5 and 10. Change the value of `x` in the next cell so that it is printed.
```
x = 5
if x > 5:
if x < 10:
print(x)
```
We can see that what this **nested** `if` statement is checking is if `x` is greater than 5, and if so, then the conditional block is entered. Next, it checks if `x` is less than 10, and if this is true `x` is printed. We can intuitively guess that there is a better way to check for this condition... And there is!!!
Python gives us full access to what are known as boolean operations. The ones that we will use most often are `and`, `or`, and `not`. Both `and` and `or` take two conditions as inputs, while `not` takes only a single condition. All of these operations return a single boolean. `and` requires both conditions to be `True` to return `True`, otherwise it will return `False`. `or` requires only one of the conditions to be `True` to return `True`; the only time it returns `False` is when both inputs are `False`. The `not` switches the truth of the input condition. These operations are derived from formal logic, and you can find a full discussion of their intricacies [here](https://en.wikipedia.org/wiki/Truth_table).
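As a quick illustration of those truth rules (separate from the exercise cells that follow), you can evaluate the operators directly:
```python
print(True and False)  # False: `and` needs both sides to be True
print(True or False)   # True: `or` needs only one side to be True
print(not True)        # False: `not` flips the truth value
```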
What this means is that we now have a natural way to combine conditions. The previously nested `if` statement can now be written as a simple `if x > 5 and x < 10`. We can also chain other interesting conditionals together. Change the value of `x` in the next cell so that it isn't printed and in the cell after so that it is.
```
x = 7  # pick a value so that x is NOT printed (any value from 5 to 10 works)
if x > 10 or x < 5:
print(x)
x = 5
if not (x <= 10 and x >= 5):
print(x)
x = 5
not (x <= 10 and x >= 5)
```
Notice how the first `if` in the above code snippet uses an `or`, printing `x` if it is greater than 10 or less than 5. Inherently, this statement is also saying that it will print `x` if `x` is not between 5 and 10, which is expressed in the second `if` statement. This illustrates an important point - there is always more than one way to accomplish the same thing in programming.
You can also combine several statements:
```
country = "US"
age = 21
if (country == "US" and age>=21) or (country != "US" and age >= 18):
print("This person can legally drink alcohol!")
else:
print("You are either too young or in the wrong country!")
```
Change one variable so that the person can no longer drink legally!
## Check your understanding
**If Questions**
1. Write an `if` statement to check if a number is smaller than 100.
2. Write some code that accepts a user inputted number. It checks to see if that number is positive, and if it is it adds 50 to the number. Then print the number.
**Elif Else Questions**
1. Write an `if`-`elif`-`else` statement that does the following:
* Prints "This is a single digit number." if `x` is a single digit number
* Prints "This is a double digit number." if `x` is a double digit number
* Prints "This is a big number." otherwise.
**And Or Not Questions**
1. Write the condition in Python in a single statement: if x is between 20 and 30 print x.
2. Write the following condition in two ways using Python: if x is greater than 10 print x.
3. Write the following condition in Python: if x isn't a positive number print x.
4. Write the following condition in Python: if x is between 1 and 50 except the numbers between 25 to 30 print x.
```
number = 80
if number < 100:
print(number)
x = int(input("Enter number: "))
if x > 0:
x += 50
print(x)
x = input("Enter number: ")
if len(x) == 1:
print("single")
elif len(x) == 2:
print("double")
else:
print("otherwise")
x = int(input("Enter number: "))
if x > 20 and x < 30:
print(x)
x = 10
if x > 10:
print(x)
x = 10
if not x <= 10:
print(x)
x = 0
if not x > 0:
print(x)
x = 26
if x >1 and x < 50 and not (x > 25 and x < 30):
print(x)
```
|
github_jupyter
|
if condition:
if_block_statement
type(True)
type(False)
1 == 2
1 != 2
1 < 2
1 > 2
1 <= 2
1 >= 2
if x > 5:
x += 10
print(x)
x = input('Please enter a number: ')
print(x)
x = int(input('Please enter a number: '))
if x > 5:
x += 10
print(x)
x = int(input('Please enter a number: '))
if x < 0:
print('You entered a negative number.')
elif x > 0:
print('You entered a positive number.')
else:
print('You entered the number 0.')
x = int(input('Please enter a number: '))
if x > 5:
print('You entered a number bigger than 5.')
elif x > 0:
print('You entered a positive number.')
elif x < 0:
print('You entered a negative number.')
else:
print('You entered the number 0.')
x = 5
if x > 5:
if x < 10:
print(x)
x = 7  # pick a value so that x is NOT printed (any value from 5 to 10 works)
if x > 10 or x < 5:
print(x)
x = 5
if not (x <= 10 and x >= 5):
print(x)
x = 5
not (x <= 10 and x >= 5)
country = "US"
age = 21
if (country == "US" and age>=21) or (country != "US" and age >= 18):
print("This person can legally drink alcohol!")
else:
print("You are either too young or in the wrong country!")
number = 80
if number < 100:
print(number)
x = int(input("Enter number: "))
if x > 0:
x += 50
print(x)
x = input("Enter number: ")
if len(x) == 1:
print("single")
elif len(x) == 2:
print("double")
else:
print("otherwise")
x = int(input("Enter number: "))
if x > 20 and x < 30:
print(x)
x = 10
if x > 10:
print(x)
x = 10
if not x <= 10:
print(x)
x = 0
if not x > 0:
print(x)
x = 26
if x >1 and x < 50 and not (x > 25 and x < 30):
print(x)
| 0.081147 | 0.988188 |
# Realization of Non-Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
## Introduction
Computing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. The methods for this purpose are typically classified into
* non-recursive and
* recursive
techniques. This section focuses on the realization of non-recursive filters.
### Non-Recursive Filters
An LTI system can be characterized completely by its impulse response $h[k]$

The output signal $y[k]$ is given by (linear) convolution of the input signal $x[k]$ with the impulse response $h[k]$
\begin{equation}
y[k] = x[k] * h[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \; h[k-\kappa]
\end{equation}
Two aspects of this representation become evident when inspecting above equation:
1. The output signal $y[k]$ is a linear combination of the input signal $x[k]$. There is no feedback of the output signal of past time-instants. Therefore, such filters are termed as *non-recursive* filters.
2. In order to compute the output signal at one particular time-instant $k$, the input signal needs to be known for all past and future time-instants.
The second aspect prohibits a practical realization. In order to be able to realize a non-recursive filter by convolution, the output at time-instant $k$ should only depend on the input signal $x[k]$ up to time-index $k$
\begin{equation}
y[k] = \sum_{\kappa = -\infty}^{k} x[\kappa] \; h[k-\kappa]
\end{equation}
This is the case when the impulse response is causal, hence when $h[k] = 0$ for $k<0$. However, this still requires knowledge of the input signal for all past time-instants. If we further assume that the input signal is causal, $x[k] = 0$ for $k<0$, we get
\begin{equation}
y[k] = \sum_{\kappa = 0}^{k} x[\kappa] \; h[k-\kappa]
\end{equation}
### Finite Impulse Response
Many practical systems have an impulse response of finite length $N$ or can be approximated by an impulse response of finite length
\begin{equation}
h_N[k] = \begin{cases} h[k] & \text{ for } 0 \leq k < N \\ 0 & \text{ otherwise} \end{cases}
\end{equation}
Such an impulse response is denoted as [*finite impulse response*](https://en.wikipedia.org/wiki/Finite_impulse_response) (FIR). Introducing $h_N[k]$ into above sum and rearranging terms yields
\begin{equation}
y[k] = \sum_{\kappa = 0}^{k} x[\kappa] \; h_N[k-\kappa] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \; x[k-\kappa]
\end{equation}
Hence for a causal input signal $x[k]$ and a FIR the output of the system can be computed by a finite number of operations.
The evaluation of the convolution for a FIR of length $N$ requires $N$ multiplications and $N-1$ additions per time index $k$. For the real-time convolution of an audio signal with a sampling rate of $f_\text{S} = 48$ kHz with a FIR of length $N = 48000$ we have to compute around $2 \times 2.3 \cdot 10^9$ numerical operations per second. This is a considerable numerical complexity, especially on embedded or mobile platforms. Therefore, various techniques have been developed to lower the computational complexity.
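To make this concrete, here is a minimal NumPy sketch (not part of the original notebook) that evaluates the finite convolution sum above directly and checks the result against `numpy.convolve`:
```python
import numpy as np
N = 4                  # FIR length (example value)
h = np.ones(N) / N     # impulse response h_N[k]: a simple moving-average filter
x = np.arange(10.0)    # causal input signal x[k] for k = 0, ..., 9
# direct evaluation of y[k] = sum_{kappa=0}^{N-1} h_N[kappa] * x[k - kappa]
y = np.zeros_like(x)
for k in range(len(x)):
    for kappa in range(N):
        if k - kappa >= 0:  # x[k] = 0 for k < 0 (causal input)
            y[k] += h[kappa] * x[k - kappa]
# the built-in convolution yields the same result (truncated to the length of x)
print(np.allclose(y, np.convolve(x, h)[:len(x)]))  # True
```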
|
github_jupyter
|
# Realization of Non-Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
## Introduction
Computing the output $y[k] = \mathcal{H} \{ x[k] \}$ of a [linear time-invariant](https://en.wikipedia.org/wiki/LTI_system_theory) (LTI) system is of central importance in digital signal processing. This is often referred to as [*filtering*](https://en.wikipedia.org/wiki/Digital_filter) of the input signal $x[k]$. The methods for this purpose are typically classified into
* non-recursive and
* recursive
techniques. This section focuses on the realization of non-recursive filters.
### Non-Recursive Filters
An LTI system can be characterized completely by its impulse response $h[k]$

The output signal $y[k]$ is given by (linear) convolution of the input signal $x[k]$ with the impulse response $h[k]$
\begin{equation}
y[k] = x[k] * h[k] = \sum_{\kappa = -\infty}^{\infty} x[\kappa] \; h[k-\kappa]
\end{equation}
Two aspects of this representation become evident when inspecting above equation:
1. The output signal $y[k]$ is a linear combination of the input signal $x[k]$. There is no feedback of the output signal of past time-instants. Therefore, such filters are termed as *non-recursive* filters.
2. In order to compute the output signal at one particular time-instant $k$, the input signal needs to be known for all past and future time-instants.
The second aspect prohibits a practical realization. In order to be able to realize a non-recursive filter by convolution, the output at time-instant $k$ should only depend on the input signal $x[k]$ up to time-index $k$
\begin{equation}
y[k] = \sum_{\kappa = -\infty}^{k} x[\kappa] \; h[k-\kappa]
\end{equation}
This is the case when the impulse response is causal, hence when $h[k] = 0$ for $k<0$. However, this still requires knowledge of the input signal for all past time-instants. If we further assume that the input signal is causal, $x[k] = 0$ for $k<0$, we get
\begin{equation}
y[k] = \sum_{\kappa = 0}^{k} x[\kappa] \; h[k-\kappa]
\end{equation}
### Finite Impulse Response
Many practical systems have an impulse response of finite length $N$ or can be approximated by an impulse response of finite length
\begin{equation}
h_N[k] = \begin{cases} h[k] & \text{ for } 0 \leq k < N \\ 0 & \text{ otherwise} \end{cases}
\end{equation}
Such an impulse response is denoted as [*finite impulse response*](https://en.wikipedia.org/wiki/Finite_impulse_response) (FIR). Introducing $h_N[k]$ into above sum and rearranging terms yields
\begin{equation}
y[k] = \sum_{\kappa = 0}^{k} x[\kappa] \; h_N[k-\kappa] = \sum_{\kappa = 0}^{N-1} h_N[\kappa] \; x[k-\kappa]
\end{equation}
Hence for a causal input signal $x[k]$ and a FIR the output of the system can be computed by a finite number of operations.
The evaluation of the convolution for a FIR of length $N$ requires $N$ multiplications and $N-1$ additions per time index $k$. For the real-time convolution of an audio signal with a sampling rate of $f_\text{S} = 48$ kHz with a FIR of length $N = 48000$ we have to compute around $2 \times 2.3 \cdot 10^9$ numerical operations per second. This is a considerable numerical complexity, especially on embedded or mobile platforms. Therefore, various techniques have been developed to lower the computational complexity.
| 0.944357 | 0.980375 |
# Hyper-parameter tuning
**Learning Objectives**
1. Learn how to use `cloudml-hypertune` to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the `.yaml` file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
## Introduction
Let's see if we can improve upon the results of the previous lab by tuning our hyperparameters.
Hyperparameters are parameters that are set *prior* to training a model, as opposed to parameters which are learned *during* training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to finding the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
**1. Manual**
Traditionally, hyperparameter tuning is a manual trial and error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point; they then observe the result and use that information to choose a new set of hyperparameters, trying to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
**2. Grid Search**
On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
**3. Random Search**
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range.
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
**4. Bayesian Optimization**
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works [here](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization).
Pros
- Picks values intelligently based on results from past trials
- Less expensive because requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
**AI Platform HyperTune**
AI Platform HyperTune, powered by [Google Vizier](https://ai.google/research/pubs/pub46180), uses Bayesian Optimization by default, but [also supports](https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms) Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters over a large search space, Bayesian Optimization is best.
```
BUCKET = "qwiklabs-gcp-04-8722038efd75"
PROJECT = "qwiklabs-gcp-04-8722038efd75"
REGION = "us-west1"
TFVERSION = "2.1" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
```
## Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
1. Upload data to Google Cloud Storage
2. Move code into a trainer Python package
3. Submit training job with `gcloud` to train on AI Platform
### Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this run the notebook [0_export_data_from_bq_to_gcs.ipynb](./0_export_data_from_bq_to_gcs.ipynb), which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
```
!gsutil ls gs://$BUCKET/taxifare/data
```
## Move code into python package
In the [previous lab](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb), we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the `taxifare/trainer` directory:
- `__init__.py`
- `model.py`
- `task.py`
```
!ls -la taxifare/trainer
```
To use hyperparameter tuning in your training job you must perform the following steps:
1. Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.
2. Include the following code in your training application:
   - Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.
   - Add your hyperparameter metric to the summary for your graph.
To submit a hyperparameter tuning job, we must modify `model.py` and `task.py` to expose any variables we want to tune as command line arguments.
### Modify model.py
## Exercise.
Complete the TODOs in the `train_and_evaluate` function below.
- Define the hyperparameter tuning metric `hp_metric`
- Set up cloudml-hypertune to report the results of each trial by calling its helper function, `report_hyperparameter_tuning_metric`
```
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
    # Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = history.history['val_rmse'][num_evals-1]
# TODO 1
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=num_evals
)
return history
```
### Modify task.py
```
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
```
### Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The configuration you will complete below uses a metric named `rmse`:
## Exercise.
Complete the TODOs below.
- Specify the hypertuning configuration for the learning rate, the batch size, and the number of buckets using one of the available [hyperparameter types](https://cloud.google.com/ai-platform/training/docs/hyperparameter-tuning-overview#hyperparameter_types).
- Specify the hyperparameter tuning metric tag
- Set the maximum number of parallel trial and the max number of trials
```
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 2
hyperparameterMetricTag: rmse
enableTrialEarlyStopping: True
params:
- parameterName: lr
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size
type: DISCRETE
discreteValues:
- 15
- 30
- 50
```
#### Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
**TensorFlow with a runtime version**
If you use an AI Platform Training runtime version and train with TensorFlow, you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary, or by using the `cloudml-hypertune` helper as `model.py` does above.
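For reference, here is a minimal sketch of the `cloudml-hypertune` reporting call (the tag must match `hyperparameterMetricTag` in `hptuning_config.yaml`; the metric value below is only a placeholder):
```python
import hypertune  # from the cloudml-hypertune package
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='rmse',  # must match hyperparameterMetricTag in hptuning_config.yaml
    metric_value=0.123,                # placeholder: report your trial's final validation RMSE here
    global_step=10)
```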
```
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--config=hptuning_config.yaml \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
```
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
|
github_jupyter
|
BUCKET = "qwiklabs-gcp-04-8722038efd75"
PROJECT = "qwiklabs-gcp-04-8722038efd75"
REGION = "us-west1"
TFVERSION = "2.1" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
!gsutil ls gs://$BUCKET/taxifare/data
!ls -la taxifare/trainer
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
@tf.function
def fare_thresh(x):
return 60 * activations.relu(x)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
    # Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = history.history['val_rmse'][num_evals-1]
# TODO 1
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=num_evals
)
return history
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 2
hyperparameterMetricTag: rmse
enableTrialEarlyStopping: True
params:
- parameterName: lr
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size
type: DISCRETE
discreteValues:
- 15
- 30
- 50
%%bash
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--config=hptuning_config.yaml \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
```
# Imports
import numpy as np
from PIL import Image
import requests
from io import BytesIO
from keras import backend
from keras.models import Model
from keras.applications.vgg16 import VGG16
from scipy.optimize import fmin_l_bfgs_b
# Hyperparams
ITERATIONS = 10
CHANNELS = 3
IMAGE_SIZE = 350
IMAGE_WIDTH = IMAGE_SIZE
IMAGE_HEIGHT = IMAGE_SIZE
IMAGENET_MEAN_RGB_VALUES = [123.68, 116.779, 103.939]
CONTENT_WEIGHT = 0.02
STYLE_WEIGHT = 4.5
TOTAL_VARIATION_WEIGHT = 0.995
TOTAL_VARIATION_LOSS_FACTOR = 1.25
# Paths
input_image_path = "input.png"
style_image_path = "style.png"
output_image_path = "output.png"
combined_image_path = "combined.png"
# San Francisco
# san_francisco_image_path = "https://upload.wikimedia.org/wikipedia/commons/f/f9/Beijing_West_Railway_Station_%2820180628184009%29.jpg"
san_francisco_image_path = "http://n.sinaimg.cn/sinakd20210718s/786/w786h800/20210718/3a50-a5849a627e3307835e6806003a2e45e0.jpg"
# Warsaw by Tytus Brzozowski, http://t-b.pl
tytus_image_path = "http://meetingbenches.com/wp-content/flagallery/tytus-brzozowski-polish-architect-and-watercolorist-a-fairy-tale-in-warsaw/tytus_brzozowski_13.jpg"
#Input visualization
input_image = Image.open(BytesIO(requests.get(san_francisco_image_path).content))
input_image = input_image.resize((IMAGE_WIDTH, IMAGE_HEIGHT))
input_image.save(input_image_path)
input_image
# Style visualization
style_image = Image.open(BytesIO(requests.get(tytus_image_path).content))
style_image = style_image.resize((IMAGE_WIDTH, IMAGE_HEIGHT))
style_image.save(style_image_path)
style_image
# Data normalization and reshaping from RGB to BGR
input_image_array = np.asarray(input_image, dtype="float32")
input_image_array = np.expand_dims(input_image_array, axis=0)
input_image_array[:, :, :, 0] -= IMAGENET_MEAN_RGB_VALUES[2]
input_image_array[:, :, :, 1] -= IMAGENET_MEAN_RGB_VALUES[1]
input_image_array[:, :, :, 2] -= IMAGENET_MEAN_RGB_VALUES[0]
input_image_array = input_image_array[:, :, :, ::-1]
style_image_array = np.asarray(style_image, dtype="float32")
style_image_array = np.expand_dims(style_image_array, axis=0)
style_image_array[:, :, :, 0] -= IMAGENET_MEAN_RGB_VALUES[2]
style_image_array[:, :, :, 1] -= IMAGENET_MEAN_RGB_VALUES[1]
style_image_array[:, :, :, 2] -= IMAGENET_MEAN_RGB_VALUES[0]
style_image_array = style_image_array[:, :, :, ::-1]
# Model
input_image = backend.variable(input_image_array)
style_image = backend.variable(style_image_array)
combination_image = backend.placeholder((1, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
input_tensor = backend.concatenate([input_image,style_image,combination_image], axis=0)
model = VGG16(input_tensor=input_tensor, include_top=False)
def content_loss(content, combination):
return backend.sum(backend.square(combination - content))
layers = dict([(layer.name, layer.output) for layer in model.layers])
content_layer = "block2_conv2"
layer_features = layers[content_layer]
content_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss = backend.variable(0.)
loss = loss + CONTENT_WEIGHT * content_loss(content_image_features,
combination_features)
def gram_matrix(x):
features = backend.batch_flatten(backend.permute_dimensions(x, (2, 0, 1)))
gram = backend.dot(features, backend.transpose(features))
return gram
def compute_style_loss(style, combination):
style = gram_matrix(style)
combination = gram_matrix(combination)
size = IMAGE_HEIGHT * IMAGE_WIDTH
return backend.sum(backend.square(style - combination)) / (4. * (CHANNELS ** 2) * (size ** 2))
style_layers = ["block1_conv2", "block2_conv2", "block3_conv3", "block4_conv3", "block5_conv3"]
for layer_name in style_layers:
layer_features = layers[layer_name]
style_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
style_loss = compute_style_loss(style_features, combination_features)
loss += (STYLE_WEIGHT / len(style_layers)) * style_loss
def total_variation_loss(x):
a = backend.square(x[:, :IMAGE_HEIGHT-1, :IMAGE_WIDTH-1, :] - x[:, 1:, :IMAGE_WIDTH-1, :])
b = backend.square(x[:, :IMAGE_HEIGHT-1, :IMAGE_WIDTH-1, :] - x[:, :IMAGE_HEIGHT-1, 1:, :])
return backend.sum(backend.pow(a + b, TOTAL_VARIATION_LOSS_FACTOR))
loss += TOTAL_VARIATION_WEIGHT * total_variation_loss(combination_image)
outputs = [loss]
outputs += backend.gradients(loss, combination_image)
def evaluate_loss_and_gradients(x):
x = x.reshape((1, IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
outs = backend.function([combination_image], outputs)([x])
loss = outs[0]
gradients = outs[1].flatten().astype("float64")
return loss, gradients
class Evaluator:
def loss(self, x):
loss, gradients = evaluate_loss_and_gradients(x)
self._gradients = gradients
return loss
def gradients(self, x):
return self._gradients
evaluator = Evaluator()
x = np.random.uniform(0, 255, (1, IMAGE_HEIGHT, IMAGE_WIDTH, 3)) - 128.
for i in range(ITERATIONS):
x, loss, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(), fprime=evaluator.gradients, maxfun=20)
print("Iteration %d completed with loss %d" % (i, loss))
xo = x.copy()
xo = xo.reshape((IMAGE_HEIGHT, IMAGE_WIDTH, CHANNELS))
xo = xo[:, :, ::-1]
xo[:, :, 0] += IMAGENET_MEAN_RGB_VALUES[2]
xo[:, :, 1] += IMAGENET_MEAN_RGB_VALUES[1]
xo[:, :, 2] += IMAGENET_MEAN_RGB_VALUES[0]
xo = np.clip(xo, 0, 255).astype("uint8")
output_image = Image.fromarray(xo)
output_image.save(f'output_review{i}.png')
output_image.save(output_image_path)
output_image
# Visualizing combined results
combined = Image.new("RGB", (IMAGE_WIDTH*3, IMAGE_HEIGHT))
x_offset = 0
for image in map(Image.open, [input_image_path, style_image_path, output_image_path]):
combined.paste(image, (x_offset, 0))
x_offset += IMAGE_WIDTH
combined.save(combined_image_path)
combined
# Preview the intermediate outputs saved after each of the 10 iterations
combined_output = Image.new("RGB", (IMAGE_WIDTH, IMAGE_HEIGHT*11))
x_offset = 0
y_offset = 0
for i in range(ITERATIONS):
image = Image.open(f'output_review{i}.png')
combined_output.paste(image, (x_offset, y_offset))
y_offset += IMAGE_WIDTH + 10
combined_output
```
# Starbucks Capstone Challenge
### Introduction
This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer.
### Example
To give an example, a user could receive a "buy 10 dollars, get 2 dollars off" discount offer on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.
However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer.
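As a rough sketch of the cleaning rule this implies (using hypothetical column names rather than the exact fields built later in this notebook), an offer could be labelled as genuinely effective only when it was both viewed and completed inside its validity window:

```
import pandas as pd
import numpy as np

# Hypothetical per-offer records: one row per received offer, with the day
# (since the start of the test) on which it was viewed/completed, NaN if never.
offers = pd.DataFrame({
    "received_day":  [0.0, 0.0, 7.0],
    "viewed_day":    [2.0, np.nan, 9.0],
    "completed_day": [5.0, 4.0, np.nan],
    "duration_days": [10, 10, 5],
})

end_day = offers["received_day"] + offers["duration_days"]
viewed_in_window = (offers["viewed_day"] >= offers["received_day"]) & (offers["viewed_day"] <= end_day)
completed_in_window = (offers["completed_day"] >= offers["received_day"]) & (offers["completed_day"] <= end_day)

# Only count an offer when the customer saw it and then completed it;
# the second row (completed but never viewed) is correctly left out.
offers["effective"] = (viewed_in_window & completed_in_window).astype(int)
offers
```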
### Cleaning
This makes data cleaning especially important and tricky.
You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers.
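One way to approximate that no-offer baseline (again only a sketch with hypothetical, precomputed columns rather than the datasets used below) is to aggregate the transactions that fall outside every offer window by demographic group:

```
import pandas as pd

# Hypothetical transactions with a precomputed flag that is True when the
# purchase happened while no offer was active for that customer.
txns = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F"],
    "age_group": ["30-39", "30-39", "50-59", "50-59", "50-59"],
    "outside_offer_window": [True, False, True, True, False],
    "amount": [8.0, 12.0, 3.5, 6.0, 15.0],
})

# Average spend per demographic group when no offer was in play
baseline_spend = (txns[txns["outside_offer_window"]]
                  .groupby(["gender", "age_group"])["amount"]
                  .agg(["mean", "count"]))
baseline_spend
```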
### Final Advice
Because this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A).
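For instance, a purely heuristic approach might boil down to a response-rate table like the sketch below (the `responses` frame and its columns are hypothetical, standing in for the merged data built later in this notebook):

```
import pandas as pd

# Hypothetical merged table: one row per (customer, offer) with a 0/1 response flag.
responses = pd.DataFrame({
    "gender":     ["F", "F", "M", "F", "M", "M"],
    "age_group":  ["30-39", "30-39", "30-39", "50-59", "50-59", "30-39"],
    "offer_type": ["A", "B", "A", "A", "B", "B"],
    "responded":  [1, 0, 1, 0, 1, 0],
})

# Response rate per demographic group and offer type
rates = (responses
         .groupby(["gender", "age_group", "offer_type"])["responded"]
         .mean()
         .unstack("offer_type"))
rates

# The heuristic would then send each group the offer with the highest response rate.
rates.idxmax(axis=1)
```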
# Data Sets
The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed
Here is the schema and explanation of each variable in the files:
**portfolio.json**
* id (string) - offer id
* offer_type (string) - type of offer, i.e. BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)
**profile.json**
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income
**transcript.json**
* event (str) - record description (i.e. transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
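Note that `value` is a dictionary whose single key depends on the event type. A minimal sketch of how it might be unpacked (using made-up rows, and assuming the key names observed in this dataset) is:

```
import pandas as pd

# Made-up transcript rows; in the real file 'value' holds either an offer id
# ('offer id' or 'offer_id' depending on the event) or a transaction 'amount'.
sample = pd.DataFrame({
    "event": ["offer received", "offer completed", "transaction"],
    "value": [{"offer id": "abc123"}, {"offer_id": "abc123", "reward": 2}, {"amount": 12.5}],
})

sample["offer_id"] = sample["value"].apply(lambda d: d.get("offer id", d.get("offer_id")))
sample["amount"] = sample["value"].apply(lambda d: d.get("amount"))
sample[["event", "offer_id", "amount"]]
```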
## Load Data sets
```
import datetime
import pickle
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor,MultiOutputClassifier
from xgboost import XGBClassifier,XGBRegressor
from sklearn.metrics import precision_score,recall_score,f1_score,confusion_matrix,r2_score,accuracy_score,precision_recall_curve,classification_report, make_scorer, fbeta_score
import seaborn as sns
from datetime import datetime
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer, MultiLabelBinarizer
from sklearn.model_selection import RandomizedSearchCV
# Import sklearn.preprocessing.MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
#Importing necessary libraries
from sklearn.metrics import mean_squared_error, r2_score
import xgboost as xgb
import progressbar
import pandas as pd
import numpy as np
import math
import json
import re
import os
from joblib import dump, load
#% matplotlib inline
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile_raw = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
portfolio.head()
profile_raw.head()
transcript.head()
portfolio.isna().sum()
profile_raw.isna().sum()
profile_raw.info()
transcript.isna().sum()
```
## Cleaning datasets
### Clean portfolio data
```
def clean_portfolio(data):
    # work on a copy of the passed-in portfolio; the duration stays in days
    # (the column is renamed to 'durationdays' later on)
    cleaned_portfolio = data.copy()
# apply one hot encoding to channels column
cleaned_portfolio['web'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'web' in x else 0)
cleaned_portfolio['email'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'email' in x else 0)
cleaned_portfolio['mobile'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'mobile' in x else 0)
cleaned_portfolio['social'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'social' in x else 0)
# apply one hot encoding to offer_type column
offer_type = pd.get_dummies(cleaned_portfolio['offer_type'])
# drop the channels and offer_type column
cleaned_portfolio.drop(['channels', 'offer_type'], axis=1, inplace=True)
# combine the portfolio and offer_type dataframe to form a cleaned dataframe
cleaned_portfolio = pd.concat([cleaned_portfolio, offer_type], axis=1, sort=False)
return(cleaned_portfolio)
cleaned_portfolio = clean_portfolio(portfolio)
cleaned_portfolio.head()
```
### Customer profile data EDA
```
profile = profile_raw
```
__Determine the percentage of missing values in the profile data__
```
profile.isnull().sum(axis=0) * 100 / profile.shape[0]
```
__Compute gender attribute distribution__
```
gender_counts = profile['gender'].value_counts()
gender_counts *= 100 / gender_counts.sum()
gender_counts
profile[profile['income'].notnull()].describe()
```
__Evaluate what year each customer became a rewards member__
```
def convert_to_datetime(elem):
"""Converts a string to a datetime object
INPUT:
elem: String that stores a date in the %Y%m%d format
OUTPUT:
datetimeobj: Datetime object"""
return datetime.strptime(str(elem), '%Y%m%d')
became_member_on = profile['became_member_on'].apply(convert_to_datetime)
start_year = became_member_on.apply(lambda elem: elem.year).value_counts()
start_year *= 100 / start_year.sum()
start_year
```
__Evaluate what month each customer became a rewards program member__
```
start_month = became_member_on.apply(lambda elem: elem.month).value_counts()
start_month *= 100 / start_month.sum()
start_month
def update_column_name(dataframe,
old_column_name,
new_column_name):
""" Updates a Pandas DataFrame column name
INPUT:
dataframe: Pandas DataFrame object
old_column_name: String that stores the old column name
new_column_name: String that stores the new column name
OUTPUT:
column_names: np.array that stores the updated Pandas DataFrame
column names"""
column_names = dataframe.columns.values
select_data = np.array([elem == old_column_name for elem in column_names])
column_names[select_data] = new_column_name
return column_names
```
__Clean the customer profile data__:
1. Remove customers with missing income data
2. Remove customer profiles where the gender attribute is missing
3. Change the name of the 'id' column to 'customerid'
4. Transform the 'became_member_on' column to a datetime object
5. One hot encode a customer's membership start year
6. One hot encode a customer's age range
7. Transform a customer's gender from a character to a number
```
def clean_profile(profile):
""" Transforms a DataFrame that contains demographic data for each
customer
INPUT:
        profile: DataFrame that contains raw demographic data for each
            customer
OUTPUT:
profile: DataFrame that contains demographic data for each
customer
"""
# Remove customers with N/A income data
profile = profile[profile['income'].notnull()]
# Remove customers with unspecified gender
profile = profile[profile['gender'] != 'O']
profile = profile.reset_index(drop=True)
# Change the name of the 'id' column to 'customerid'
profile.columns = update_column_name(profile,
'id',
'customerid')
# Initialize a list that describes the desired DataFrame column
# ordering
column_ordering = ['customerid',
'gender',
'income']
# Transform the 'became_member_on' column to a datetime object
profile['became_member_on'] =\
profile['became_member_on'].apply(convert_to_datetime)
# One hot encode a customer's membership start year
profile['membershipstartyear'] =\
profile['became_member_on'].apply(lambda elem: elem.year)
membershipstartyear_df = pd.get_dummies(profile['membershipstartyear'])
column_ordering.extend(membershipstartyear_df.columns.values)
# One hot encode a customer's age range
    min_age_limit = int(np.floor(np.min(profile['age'])/10)*10)
    max_age_limit = int(np.ceil(np.max(profile['age'])/10)*10)
profile['agerange'] =\
pd.cut(profile['age'],
(range(min_age_limit,max_age_limit + 10, 10)),
right=False)
profile['agerange'] = profile['agerange'].astype('str')
agerange_df = pd.get_dummies(profile['agerange'])
column_ordering.extend(agerange_df.columns.values)
# Transform a customer's gender from a character to a number
binarizerobj = LabelBinarizer()
profile['gender'] = binarizerobj.fit_transform(profile['gender'])
gender_integer_map = {}
for elem in binarizerobj.classes_:
gender_integer_map[elem] = binarizerobj.transform([elem])[0,0]
    # Append one hot encoded age range and membership start year variables
profile = pd.concat([profile,
agerange_df,
membershipstartyear_df], axis=1)
    # Drop the columns that are no longer needed
profile = profile.drop(columns=['age',
'agerange',
'became_member_on',
'membershipstartyear'])
# Return a DataFrame with "clean" customer profile data
return profile[column_ordering], gender_integer_map
(profile,
gender_integer_map) = clean_profile(profile)
print("Number of user profiles: %d" % (profile.shape[0]))
profile.describe()
print(gender_integer_map)
```
__Distribution of customer age and income__
```
def display_customer_profile(data):
'''Display customer profile with histograms'''
# Display Histogram of Customer Age
user_age = data['age'].plot(kind='hist', bins=20, title='Distribution of Customer Age')
user_age.set_xlabel("Customer Age")
# Display Histogram of User Income
plt.figure()
user_income = data['income'].plot(kind='hist', bins=20, title='Distribution of Customer Income')
user_income.set_xlabel("Income")
display_customer_profile(profile_raw)
```
__Plot income distribution as a function of gender__
```
plt.figure(figsize=(12,5))
plt.title("The box plot of Income")
sns.boxplot(y="gender", x="income", data = profile
,orient="h", palette = 'inferno')
```
Average income among male customers is higher than among female customers; however, the minimum and maximum incomes for both genders are approximately the same.
__Compute customer gender distribution__
```
profile['gender'].value_counts()
```
### Transcript Data EDA
```
#Clean transcript data
event_counts = transcript['event'].value_counts()
event_counts = pd.DataFrame(list(zip(event_counts.index.values, event_counts)),
columns=['event', 'count'])
event_counts
total_transactions = event_counts['count'].sum()
percentage_transactions = 100 * event_counts.iloc[0]['count'] / total_transactions
percentage_offers = 100 * event_counts.iloc[1:]['count'].sum() / total_transactions
print("Percentage of customer transaction events: %.1f %%" % (percentage_transactions))
print("Percentage of customer offer events: %.1f %%" % (percentage_offers))
transcript.head()
profile.head()
```
__Clean Transcript data__:
1. Change the name of the 'person' column to 'customerid'
2. Remove customer id's that are not in the customer profile DataFrame
3. Convert time variable units from hours to days
4. Change the name of the 'time' column to 'timedays'
5. Create a DataFrame that describes offers
6. Create an offerid column
7. Parse the offer event type (i.e. 'received', 'viewed', or 'completed')
8. One hot encode customer offer events
9. Create a DataFrame that describes customer transaction events
10. Parse customer transaction values
```
def clean_transcript(profile,transcript):
""" Transforms a DataFrame that contains records for transactions, offers
received, offers viewed, and offers completed
INPUT:
        profile: DataFrame that contains demographic data for each
            customer
        transcript: DataFrame that contains the raw event records
            (transactions, offers received, viewed, and completed)
OUTPUT:
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
"""
# Change the name of the 'person' column to 'customerid'
transcript.columns = update_column_name(transcript,
'person',
'customerid')
profile.columns = update_column_name(profile,
'id',
'customerid')
# Remove customer id's that are not in the customer profile DataFrame
select_data = transcript['customerid'].isin(profile['customerid'])
transcript = transcript[select_data]
percent_removed = 100 * (1 - select_data.sum() / select_data.shape[0])
print("Percentage of transactions removed: %.2f %%" % percent_removed)
# Convert from hours to days
transcript['time'] /= 24.0
# Change the name of the 'time' column to 'timedays'
transcript.columns = update_column_name(transcript,'time','timedays')
# Select customer offers
pattern_obj = re.compile('^offer (?:received|viewed|completed)')
h_is_offer = lambda elem: pattern_obj.match(elem) != None
is_offer = transcript['event'].apply(h_is_offer)
offer_data = transcript[is_offer].copy()
offer_data = offer_data.reset_index(drop=True)
# Initialize a list that describes the desired output DataFrame
# column ordering
column_order = ['offerid', 'customerid', 'timedays']
# Create an offerid column
offer_data['offerid'] = offer_data['value'].apply(lambda elem: list(elem.values())[0])
# Transform a column that describes a customer offer event
pattern_obj = re.compile('^offer ([a-z]+$)')
h_transform = lambda elem: pattern_obj.match(elem).groups(1)[0]
offer_data['event'] = offer_data['event'].apply(h_transform)
# One hot encode customer offer events
event_df = pd.get_dummies(offer_data['event'])
column_order.extend(event_df.columns.values)
# Create a DataFrame that describes customer offer events
offer_data = pd.concat([offer_data, event_df], axis=1)
    offer_data = offer_data.drop(columns=['event', 'value'])
offer_data = offer_data[column_order]
# Select customer transaction events
transaction = transcript[is_offer == False]
transaction = transaction.reset_index(drop=True)
# Transform customer transaction event values
transaction['amount'] = transaction['value'].apply(lambda elem: list(elem.values())[0])
# Create a DataFrame that describes customer transactions
transaction = transaction.drop(columns=['event', 'value'])
column_order = ['customerid', 'timedays', 'amount']
transaction = transaction[column_order]
return offer_data, transaction
offer_data, transaction = clean_transcript(profile,transcript)
transaction.head()
offer_data.head()
profile.head()
cleaned_portfolio.head()
cleaned_portfolio.columns = update_column_name(cleaned_portfolio,
'id',
'offerid')
cleaned_portfolio.columns = update_column_name(cleaned_portfolio,
'duration',
'durationdays')
```
## Combining Transaction, Demographic and Offer data
```
def create_offeranalysis_dataset(profile,
portfolio,
offer_data,
transaction):
""" Creates an analytic dataset from the following Starbucks challenge
datasets:
* portfolio.json - Contains offer ids and meta data (duration, type,
etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers
viewed, and offers completed
INPUT:
profile: DataFrame that contains demographic data for each
customer
portfolio: Contains offer ids and meta data (duration, type, etc.)
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
OUTPUT:
clean_data: DataFrame that characterizes the effectiveness of
customer offers"""
clean_data = []
customerid_list = offer_data['customerid'].unique()
widgets=[' [',
progressbar.Timer(), '] ',
progressbar.Bar(),
' (',
progressbar.ETA(),
') ']
for idx in range(len(customerid_list)):
clean_data.extend(create_combined_records(customerid_list[idx],
portfolio,
profile,
offer_data,
transaction))
clean_data = pd.DataFrame(clean_data)
clean_data = clean_data.sort_values('time')
return clean_data.reset_index(drop=True)
def create_combined_records(customer_id,
portfolio,
profile,
offer_data,
transaction):
"""
Creates a list of dictionaries that describes the effectiveness of
offers to a specific customer
INPUT:
customer_id: String that refers to a specific customer
profile: DataFrame that contains demographic data for each
customer
portfolio: DataFrame containing offer ids and meta data about
each offer (duration, type, etc.)
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
OUTPUT:
rows: List of dictionaries that describes the effectiveness of
offers to a specific customer
"""
# Select a customer's profile
cur_customer = profile[profile['customerid'] == customer_id]
# Select offer data for a specific customer
select_offer_data = offer_data['customerid'] == customer_id
customer_offer_data = offer_data[select_offer_data]
customer_offer_data = customer_offer_data.drop(columns='customerid')
customer_offer_data = customer_offer_data.reset_index(drop=True)
# Select transactions for a specific customer
select_transaction = transaction['customerid'] == customer_id
customer_transaction_data = transaction[select_transaction]
customer_transaction_data =\
customer_transaction_data.drop(columns='customerid')
customer_transaction_data =\
customer_transaction_data.reset_index(drop=True)
# Initialize DataFrames that describe when a customer receives,
# views, and completes an offer
event_type = ['completed',
'received',
'viewed']
offer_received =\
customer_offer_data[customer_offer_data['received'] == 1]
offer_received = offer_received.drop(columns=event_type)
offer_received = offer_received.reset_index(drop=True)
offer_viewed =\
customer_offer_data[customer_offer_data['viewed'] == 1]
offer_viewed = offer_viewed.drop(columns=event_type)
offer_viewed = offer_viewed.reset_index(drop=True)
offer_completed =\
customer_offer_data[customer_offer_data['completed'] == 1]
offer_completed = offer_completed.drop(columns=event_type)
offer_completed = offer_completed.reset_index(drop=True)
# Iterate over each offer a customer receives
rows = []
for idx in range(offer_received.shape[0]):
# Initialize the current offer id
cur_offer_id = offer_received.iloc[idx]['offerid']
# Look-up a description of the current offer
cur_offer = portfolio.loc[portfolio['offerid'] == cur_offer_id]
durationdays = cur_offer['durationdays'].values[0]
# Initialize the time period when an offer is valid
cur_offer_startime = offer_received.iloc[idx]['timedays']
cur_offer_endtime =\
offer_received.iloc[idx]['timedays'] + durationdays
        # Initialize a boolean array that selects customer transactions that
# fall within the valid offer time window
select_transaction =\
np.logical_and(customer_transaction_data['timedays'] >=
cur_offer_startime,
customer_transaction_data['timedays'] <=
cur_offer_endtime)
# Initialize a boolean array that selects a description of when a
# customer completes an offer (this array may not contain any True
# values)
select_offer_completed =\
np.logical_and(offer_completed['timedays'] >= cur_offer_startime,
offer_completed['timedays'] <= cur_offer_endtime)
# Initialize a boolean array that selects a description of when a
# customer views an offer (this array may not contain any True
# values)
select_offer_viewed =\
np.logical_and(offer_viewed['timedays'] >= cur_offer_startime,
offer_viewed['timedays'] <= cur_offer_endtime)
# Determine whether the current offer was successful
cur_offer_successful =\
select_offer_completed.sum() > 0 and select_offer_viewed.sum() > 0
        # Select customer transactions that occurred within the current offer
# valid time window
cur_offer_transactions = customer_transaction_data[select_transaction]
# Initialize a dictionary that describes the current customer offer
cur_row = {'offerid': cur_offer_id,
'customerid': customer_id,
'time': cur_offer_startime,
'offersuccessful': int(cur_offer_successful),
'totalamount': cur_offer_transactions['amount'].sum()}
cur_row.update(cur_offer.iloc[0,1:].to_dict())
cur_row.update(cur_customer.iloc[0,1:].to_dict())
# Update a list of dictionaries that describes the effectiveness of
# offers to a specific customer
rows.append(cur_row)
return rows
clean_data_csvfile = "./data/clean_data.csv"
if os.path.exists(clean_data_csvfile):
clean_data = pd.read_csv(clean_data_csvfile)
else:
clean_data = create_offeranalysis_dataset(profile,
cleaned_portfolio,
offer_data,
transaction)
clean_data.to_csv(clean_data_csvfile, index=False)
clean_data = clean_data.drop(columns=['time',
'customerid',
'email',
'informational'])
column_ordering = ['offerid', 'totalamount']
column_ordering.extend([elem for elem in clean_data.columns if elem not in column_ordering])
clean_data = clean_data[column_ordering]
clean_data.head()
clean_data.columns
clean_data.isna().sum()
```
# Modeling
**Objective:** The objective of this modeling exercise is to build a machine learning model that predicts whether or not a customer will respond to an offer.
## Defining features for the model
```
data_y = clean_data['offersuccessful']
data_x = clean_data.drop(['offersuccessful','offerid','totalamount'], axis = 1)
variable_names = data_x.columns
variable_names
```
## Creating train and test sets
```
from sklearn.model_selection import train_test_split, GridSearchCV
(X_train,
X_test,
y_train,
y_test) = train_test_split(data_x,
data_y,
test_size=0.3,
random_state=45)
```
### Evaluate baseline performance
```
# Baseline: fraction of successful offers, i.e. the accuracy of always predicting "successful"
baseline_accuracy = np.sum(y_train)/len(y_train)
baseline_accuracy
```
### Construct a simple logistic regression model
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(random_state=45).fit(X_train, y_train)
Y_pred_train=lr.predict(X_train)
Y_pred_test=lr.predict(X_test)
# Creating a dictionary with the train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
```
### Construct a naive Bayes model
```
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
Y_pred_train=gnb.predict(X_train)
Y_pred_test=gnb.predict(X_test)
accuracy_score(Y_pred_train,y_train)
```
### Construct a Random forest model
```
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
rf = RandomForestClassifier(n_estimators = 80, random_state = 42, max_depth= 3)
rf.fit(X_train, y_train)
Y_pred_train=rf.predict(X_train)
Y_pred_test=rf.predict(X_test)
# Creating a dictionary with the train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
```
### Construct an AdaBoost model
```
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators = 100, random_state = 42)
ada.fit(X_train, y_train)
Y_pred_train=ada.predict(X_train)
Y_pred_test=ada.predict(X_test)
# Creating a dictionary with the train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
```
__Conclusion__: We currently see similar performance from the random forest and AdaBoost models, with random forest performing slightly better, so we will go ahead with it.
```
relative_importance = rf.feature_importances_
relative_importance = relative_importance / np.sum(relative_importance)
feature_importance =\
pd.DataFrame(list(zip(variable_names,
relative_importance)),
columns=['feature', 'relativeimportance'])
feature_importance = feature_importance.sort_values('relativeimportance',
ascending=False)
feature_importance = feature_importance.reset_index(drop=True)
palette = sns.color_palette("Blues_r", feature_importance.shape[0])
plt.figure(figsize=(8, 8))
sns.barplot(x='relativeimportance',
y='feature',
data=feature_importance,
palette=palette)
plt.xlabel('Relative Importance')
plt.ylabel('Feature')
plt.title('Random Forest Estimated Feature Importance')
```
|
github_jupyter
|
import datetime
import pickle
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor,MultiOutputClassifier
from xgboost import XGBClassifier,XGBRegressor
from sklearn.metrics import precision_score,recall_score,f1_score,confusion_matrix,r2_score,accuracy_score,precision_recall_curve,classification_report, make_scorer, fbeta_score
import seaborn as sns
from datetime import datetime
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import LabelBinarizer, MultiLabelBinarizer
from sklearn.model_selection import RandomizedSearchCV
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
#Importing necessary libraries
from sklearn.metrics import mean_squared_error, r2_score
import xgboost as xgb
import progressbar
import pandas as pd
import numpy as np
import math
import json
import re
import os
from joblib import dump, load
#% matplotlib inline
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile_raw = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
portfolio.head()
profile_raw.head()
transcript.head()
portfolio.isna().sum()
profile_raw.isna().sum()
profile_raw.info()
transcript.isna().sum()
def clean_portfolio (data):
# change the duration from day to hour
cleaned_portfolio = portfolio.copy()
cleaned_portfolio['duration'] = cleaned_portfolio['duration']
# apply one hot encoding to channels column
cleaned_portfolio['web'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'web' in x else 0)
cleaned_portfolio['email'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'email' in x else 0)
cleaned_portfolio['mobile'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'mobile' in x else 0)
cleaned_portfolio['social'] = cleaned_portfolio['channels'].apply(lambda x: 1 if 'social' in x else 0)
# apply one hot encoding to offer_type column
offer_type = pd.get_dummies(cleaned_portfolio['offer_type'])
# drop the channels and offer_type column
cleaned_portfolio.drop(['channels', 'offer_type'], axis=1, inplace=True)
# combine the portfolio and offer_type dataframe to form a cleaned dataframe
cleaned_portfolio = pd.concat([cleaned_portfolio, offer_type], axis=1, sort=False)
return(cleaned_portfolio)
cleaned_portfolio = clean_portfolio(portfolio)
cleaned_portfolio.head()
profile = profile_raw
profile.isnull().sum(axis=0) * 100 / profile.shape[0]
gender_counts = profile['gender'].value_counts()
gender_counts *= 100 / gender_counts.sum()
gender_counts
profile[profile['income'].notnull()].describe()
def convert_to_datetime(elem):
"""Converts a string to a datetime object
INPUT:
elem: String that stores a date in the %Y%m%d format
OUTPUT:
datetimeobj: Datetime object"""
return datetime.strptime(str(elem), '%Y%m%d')
became_member_on = profile['became_member_on'].apply(convert_to_datetime)
start_year = became_member_on.apply(lambda elem: elem.year).value_counts()
start_year *= 100 / start_year.sum()
start_year
start_month = became_member_on.apply(lambda elem: elem.month).value_counts()
start_month *= 100 / start_month.sum()
start_month
def update_column_name(dataframe,
old_column_name,
new_column_name):
""" Updates a Pandas DataFrame column name
INPUT:
dataframe: Pandas DataFrame object
old_column_name: String that stores the old column name
new_column_name: String that stores the new column name
OUTPUT:
column_names: np.array that stores the updated Pandas DataFrame
column names"""
column_names = dataframe.columns.values
select_data = np.array([elem == old_column_name for elem in column_names])
column_names[select_data] = new_column_name
return column_names
def clean_profile(profile):
""" Transforms a DataFrame that contains demographic data for each
customer
INPUT:
(Optional) data_dir: String that stores the full path to the
data directory
OUTPUT:
profile: DataFrame that contains demographic data for each
customer
"""
# Remove customers with N/A income data
profile = profile[profile['income'].notnull()]
# Remove customers with unspecified gender
profile = profile[profile['gender'] != 'O']
profile = profile.reset_index(drop=True)
# Change the name of the 'id' column to 'customerid'
profile.columns = update_column_name(profile,
'id',
'customerid')
# Initialize a list that describes the desired DataFrame column
# ordering
column_ordering = ['customerid',
'gender',
'income']
# Transform the 'became_member_on' column to a datetime object
profile['became_member_on'] =\
profile['became_member_on'].apply(convert_to_datetime)
# One hot encode a customer's membership start year
profile['membershipstartyear'] =\
profile['became_member_on'].apply(lambda elem: elem.year)
membershipstartyear_df = pd.get_dummies(profile['membershipstartyear'])
column_ordering.extend(membershipstartyear_df.columns.values)
# One hot encode a customer's age range
min_age_limit = np.int(np.floor(np.min(profile['age'])/10)*10)
max_age_limit = np.int(np.ceil(np.max(profile['age'])/10)*10)
profile['agerange'] =\
pd.cut(profile['age'],
(range(min_age_limit,max_age_limit + 10, 10)),
right=False)
profile['agerange'] = profile['agerange'].astype('str')
agerange_df = pd.get_dummies(profile['agerange'])
column_ordering.extend(agerange_df.columns.values)
# Transform a customer's gender from a character to a number
binarizerobj = LabelBinarizer()
profile['gender'] = binarizerobj.fit_transform(profile['gender'])
gender_integer_map = {}
for elem in binarizerobj.classes_:
gender_integer_map[elem] = binarizerobj.transform([elem])[0,0]
# Appened one hot encoded age range and membership start year variables
profile = pd.concat([profile,
agerange_df,
membershipstartyear_df], axis=1)
# Drop depcreated columns
profile = profile.drop(columns=['age',
'agerange',
'became_member_on',
'membershipstartyear'])
# Return a DataFrame with "clean" customer profile data
return profile[column_ordering], gender_integer_map
(profile,
gender_integer_map) = clean_profile(profile)
print("Number of user profiles: %d" % (profile.shape[0]))
profile.describe()
print(gender_integer_map)
def display_customer_profile(data):
'''Display customer profile with histograms'''
# Display Histogram of Customer Age
user_age = data['age'].plot(kind='hist', bins=20, title='Distribution of Customer Age')
user_age.set_xlabel("Customer Age")
# Display Histogram of User Income
plt.figure()
user_income = data['income'].plot(kind='hist', bins=20, title='Distribution of Customer Income')
user_income.set_xlabel("Income")
display_customer_profile(profile_raw)
plt.figure(figsize=(12,5))
plt.title("The box plot of Income")
sns.boxplot(y="gender", x="income", data = profile
,orient="h", palette = 'inferno')
profile['gender'].value_counts()
#Clean transcript data
event_counts = transcript['event'].value_counts()
event_counts = pd.DataFrame(list(zip(event_counts.index.values, event_counts)),
columns=['event', 'count'])
event_counts
total_transactions = event_counts['count'].sum()
percentage_transactions = 100 * event_counts.iloc[0]['count'] / total_transactions
percentage_offers = 100 * event_counts.iloc[1:]['count'].sum() / total_transactions
print("Percentage of customer transaction events: %.1f %%" % (percentage_transactions))
print("Percentage of customer offer events: %.1f %%" % (percentage_offers))
transcript.head()
profile.head()
def clean_transcript(profile,transcript):
""" Transforms a DataFrame that contains records for transactions, offers
received, offers viewed, and offers completed
INPUT:
profile: DataFrame that contains demographic data for each
customer
OUTPUT:
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
"""
# Change the name of the 'person' column to 'customerid'
transcript.columns = update_column_name(transcript,
'person',
'customerid')
profile.columns = update_column_name(profile,
'id',
'customerid')
# Remove customer id's that are not in the customer profile DataFrame
select_data = transcript['customerid'].isin(profile['customerid'])
transcript = transcript[select_data]
percent_removed = 100 * (1 - select_data.sum() / select_data.shape[0])
print("Percentage of transactions removed: %.2f %%" % percent_removed)
# Convert from hours to days
transcript['time'] /= 24.0
# Change the name of the 'time' column to 'timedays'
transcript.columns = update_column_name(transcript,'time','timedays')
# Select customer offers
pattern_obj = re.compile('^offer (?:received|viewed|completed)')
h_is_offer = lambda elem: pattern_obj.match(elem) != None
is_offer = transcript['event'].apply(h_is_offer)
offer_data = transcript[is_offer].copy()
offer_data = offer_data.reset_index(drop=True)
# Initialize a list that describes the desired output DataFrame
# column ordering
column_order = ['offerid', 'customerid', 'timedays']
# Create an offerid column
offer_data['offerid'] = offer_data['value'].apply(lambda elem: list(elem.values())[0])
# Transform a column that describes a customer offer event
pattern_obj = re.compile('^offer ([a-z]+$)')
h_transform = lambda elem: pattern_obj.match(elem).groups(1)[0]
offer_data['event'] = offer_data['event'].apply(h_transform)
# One hot encode customer offer events
event_df = pd.get_dummies(offer_data['event'])
column_order.extend(event_df.columns.values)
# Create a DataFrame that describes customer offer events
offer_data = pd.concat([offer_data, event_df], axis=1)
offer_data.drop(columns=['event', 'value'])
offer_data = offer_data[column_order]
# Select customer transaction events
transaction = transcript[is_offer == False]
transaction = transaction.reset_index(drop=True)
# Transform customer transaction event values
transaction['amount'] = transaction['value'].apply(lambda elem: list(elem.values())[0])
# Create a DataFrame that describes customer transactions
transaction = transaction.drop(columns=['event', 'value'])
column_order = ['customerid', 'timedays', 'amount']
transaction = transaction[column_order]
return offer_data, transaction
offer_data, transaction = clean_transcript(profile,transcript)
transaction.head()
offer_data.head()
profile.head()
cleaned_portfolio.head()
cleaned_portfolio.columns = update_column_name(cleaned_portfolio,
'id',
'offerid')
cleaned_portfolio.columns = update_column_name(cleaned_portfolio,
'duration',
'durationdays')
def create_offeranalysis_dataset(profile,
portfolio,
offer_data,
transaction):
""" Creates an analytic dataset from the following Starbucks challenge
datasets:
* portfolio.json - Contains offer ids and meta data (duration, type,
etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers
viewed, and offers completed
INPUT:
profile: DataFrame that contains demographic data for each
customer
portfolio: Contains offer ids and meta data (duration, type, etc.)
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
OUTPUT:
clean_data: DataFrame that characterizes the effectiveness of
customer offers"""
clean_data = []
customerid_list = offer_data['customerid'].unique()
widgets=[' [',
progressbar.Timer(), '] ',
progressbar.Bar(),
' (',
progressbar.ETA(),
') ']
for idx in range(len(customerid_list)):
clean_data.extend(create_combined_records(customerid_list[idx],
portfolio,
profile,
offer_data,
transaction))
clean_data = pd.DataFrame(clean_data)
clean_data = clean_data.sort_values('time')
return clean_data.reset_index(drop=True)
def create_combined_records(customer_id,
portfolio,
profile,
offer_data,
transaction):
"""
Creates a list of dictionaries that describes the effectiveness of
offers to a specific customer
INPUT:
customer_id: String that refers to a specific customer
profile: DataFrame that contains demographic data for each
customer
portfolio: DataFrame containing offer ids and meta data about
each offer (duration, type, etc.)
offer_data: DataFrame that describes customer offer data
transaction: DataFrame that describes customer transactions
OUTPUT:
rows: List of dictionaries that describes the effectiveness of
offers to a specific customer
"""
# Select a customer's profile
cur_customer = profile[profile['customerid'] == customer_id]
# Select offer data for a specific customer
select_offer_data = offer_data['customerid'] == customer_id
customer_offer_data = offer_data[select_offer_data]
customer_offer_data = customer_offer_data.drop(columns='customerid')
customer_offer_data = customer_offer_data.reset_index(drop=True)
# Select transactions for a specific customer
select_transaction = transaction['customerid'] == customer_id
customer_transaction_data = transaction[select_transaction]
customer_transaction_data =\
customer_transaction_data.drop(columns='customerid')
customer_transaction_data =\
customer_transaction_data.reset_index(drop=True)
# Initialize DataFrames that describe when a customer receives,
# views, and completes an offer
event_type = ['completed',
'received',
'viewed']
offer_received =\
customer_offer_data[customer_offer_data['received'] == 1]
offer_received = offer_received.drop(columns=event_type)
offer_received = offer_received.reset_index(drop=True)
offer_viewed =\
customer_offer_data[customer_offer_data['viewed'] == 1]
offer_viewed = offer_viewed.drop(columns=event_type)
offer_viewed = offer_viewed.reset_index(drop=True)
offer_completed =\
customer_offer_data[customer_offer_data['completed'] == 1]
offer_completed = offer_completed.drop(columns=event_type)
offer_completed = offer_completed.reset_index(drop=True)
# Iterate over each offer a customer receives
rows = []
for idx in range(offer_received.shape[0]):
# Initialize the current offer id
cur_offer_id = offer_received.iloc[idx]['offerid']
# Look-up a description of the current offer
cur_offer = portfolio.loc[portfolio['offerid'] == cur_offer_id]
durationdays = cur_offer['durationdays'].values[0]
# Initialize the time period when an offer is valid
cur_offer_startime = offer_received.iloc[idx]['timedays']
cur_offer_endtime =\
offer_received.iloc[idx]['timedays'] + durationdays
# Initialize a boolean array that select customer transcations that
# fall within the valid offer time window
select_transaction =\
np.logical_and(customer_transaction_data['timedays'] >=
cur_offer_startime,
customer_transaction_data['timedays'] <=
cur_offer_endtime)
# Initialize a boolean array that selects a description of when a
# customer completes an offer (this array may not contain any True
# values)
select_offer_completed =\
np.logical_and(offer_completed['timedays'] >= cur_offer_startime,
offer_completed['timedays'] <= cur_offer_endtime)
# Initialize a boolean array that selects a description of when a
# customer views an offer (this array may not contain any True
# values)
select_offer_viewed =\
np.logical_and(offer_viewed['timedays'] >= cur_offer_startime,
offer_viewed['timedays'] <= cur_offer_endtime)
# Determine whether the current offer was successful
cur_offer_successful =\
select_offer_completed.sum() > 0 and select_offer_viewed.sum() > 0
# Select customer transactions that occurred within the current offer
# valid time window
cur_offer_transactions = customer_transaction_data[select_transaction]
# Initialize a dictionary that describes the current customer offer
cur_row = {'offerid': cur_offer_id,
'customerid': customer_id,
'time': cur_offer_startime,
'offersuccessful': int(cur_offer_successful),
'totalamount': cur_offer_transactions['amount'].sum()}
cur_row.update(cur_offer.iloc[0,1:].to_dict())
cur_row.update(cur_customer.iloc[0,1:].to_dict())
# Update a list of dictionaries that describes the effectiveness of
# offers to a specific customer
rows.append(cur_row)
return rows
clean_data_csvfile = "./data/clean_data.csv"
if os.path.exists(clean_data_csvfile):
clean_data = pd.read_csv(clean_data_csvfile)
else:
clean_data = create_offeranalysis_dataset(profile,
cleaned_portfolio,
offer_data,
transaction)
clean_data.to_csv(clean_data_csvfile, index=False)
clean_data = clean_data.drop(columns=['time',
'customerid',
'email',
'informational'])
column_ordering = ['offerid', 'totalamount']
column_ordering.extend([elem for elem in clean_data.columns if elem not in column_ordering])
clean_data = clean_data[column_ordering]
clean_data.head()
clean_data.columns
clean_data.isna().sum()
data_y = clean_data['offersuccessful']
data_x = clean_data.drop(['offersuccessful','offerid','totalamount'], axis = 1)
variable_names = data_x.columns
variable_names
from sklearn.model_selection import train_test_split, GridSearchCV
(X_train,
X_test,
y_train,
y_test) = train_test_split(data_x,
data_y,
test_size=0.3,
random_state=45)
baseline_accuracy = np.sum(y_train)/len(y_train)
baseline_accuracy
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
lr = LogisticRegression(random_state=45).fit(X_train, y_train)
Y_pred_train=lr.predict(X_train)
Y_pred_test=lr.predict(X_test)
# Create a dictionary with train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
Y_pred_train=gnb.predict(X_train)
Y_pred_test=gnb.predict(X_test)
accuracy_score(Y_pred_train,y_train)
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
rf = RandomForestClassifier(n_estimators = 80, random_state = 42, max_depth= 3)
rf.fit(X_train, y_train)
Y_pred_train=rf.predict(X_train)
Y_pred_test=rf.predict(X_test)
# Create a dictionary with train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators = 100, random_state = 42)
ada.fit(X_train, y_train)
Y_pred_train=ada.predict(X_train)
Y_pred_test=ada.predict(X_test)
# Create a dictionary with train and test results
model_results=dict({'F1_train':f1_score(y_train,Y_pred_train),'Precision_train':precision_score(y_train,Y_pred_train),'Recall_train':recall_score(y_train,Y_pred_train),'F1_test':f1_score(y_test,Y_pred_test),'Precision_test':precision_score(y_test,Y_pred_test),'Recall_test':recall_score(y_test,Y_pred_test)})
# Convert dictionary to dataframe and transpose
model_results=pd.DataFrame.from_dict(model_results , orient ='index')
model_results=model_results.transpose()
model_results
accuracy_score(Y_pred_train,y_train)
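# (Suggested refactor, not part of the original analysis.) The train/test metric
# dictionary above is rebuilt the same way for every classifier, so a small
# helper keeps the model comparison in one place.
def summarize_classifier(model, X_train, y_train, X_test, y_test):
    """Return a one-row DataFrame of train/test F1, precision and recall."""
    pred_train = model.predict(X_train)
    pred_test = model.predict(X_test)
    results = {'F1_train': f1_score(y_train, pred_train),
               'Precision_train': precision_score(y_train, pred_train),
               'Recall_train': recall_score(y_train, pred_train),
               'F1_test': f1_score(y_test, pred_test),
               'Precision_test': precision_score(y_test, pred_test),
               'Recall_test': recall_score(y_test, pred_test)}
    return pd.DataFrame.from_dict(results, orient='index').transpose()
# Example usage: summarize_classifier(rf, X_train, y_train, X_test, y_test)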
relative_importance = rf.feature_importances_
relative_importance = relative_importance / np.sum(relative_importance)
feature_importance =\
pd.DataFrame(list(zip(variable_names,
relative_importance)),
columns=['feature', 'relativeimportance'])
feature_importance = feature_importance.sort_values('relativeimportance',
ascending=False)
feature_importance = feature_importance.reset_index(drop=True)
palette = sns.color_palette("Blues_r", feature_importance.shape[0])
plt.figure(figsize=(8, 8))
sns.barplot(x='relativeimportance',
y='feature',
data=feature_importance,
palette=palette)
plt.xlabel('Relative Importance')
plt.ylabel('Feature')
plt.title('Random Forest Estimated Feature Importance')
# Topic Modeling
As with other forms of natural language processing, *topic modeling* allows us to quickly and systematically analyze vast quantities of unstructured text. With topic modeling, we are able to uncover some of the more abstract, underlying themes contained within large collections of text without the need to assign pre-determined categories or tags to the corpus.
This lesson will go over what, exactly, topic modeling does, walk through how to run Latent Dirichlet allocation (LDA) topic models in Python, and introduce a number of fit statistics to help us better understand the topic models we'll be generating. We'll end with a discussion of several useful visualization tools for topic modeling.
# What is Topic Modeling?
As a form of unsupervised machine learning, *topic modeling* allows for the classification of large collections of textual documents into natural groups, without the need for extensive human supervision. Employed in text mining and natural language processing, topic models can uncover the hidden, or *latent*, meanings of language patterns within texts.
[Source](https://www.tidytextmining.com/topicmodeling.html)
## Setup
In addition to `matplotlib inline` and `pandas`, we'll also be importing `CountVectorizer` from [scikit-learn](https://scikit-learn.org/stable/), a machine learning library for Python. For more information on the usage and relevant parameters taken by `CountVectorizer`, see the chapter on Text Classification.
We'll also want to use `pandas` to increase our maximum column width to 120 characters, over the default 50-character column width. This will help us more easily glance over text in the dataframe.
```
%matplotlib inline
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
pd.set_option('display.max_colwidth', 120)
```
In this chapter we'll be looking at [transcriptions](https://www.kaggle.com/unitednations/un-general-debates) of United Nations General Debates, from 1980 to 1999. We've previously looked at the same dataset when discussing word frequencies.
```
un_df = pd.read_json('un-general-debates.json')
print(len(un_df))
```
With 3,214 complete transcriptions of UN members' statements to work with, this is an ideal dataset to use as we begin playing around with topic models.
Let's look at a small random sample of 5 texts from the dataframe to get a feel for what we're dealing with:
```
un_df.sample(5)
```
### Topic Modeling Exercise 1
Take a look at the text of the UN speeches. When delivering an address, what are the different topics that are covered? Make a list of four topics and provide three example words from each topic.

## Latent Dirichlet allocation (LDA)
We'll narrow our focus for the time being to one particular form of topic modeling, *Latent Dirichlet allocation*, or *LDA* for short. With LDA topic modeling, we'll be able to treat every document in our corpus as a mixture of topics. Each document in our corpus can contain words associated with any number of topics in varying proportions. At the same time, each topic can be treated as a mixture of words. Any given word can be associated with any number of topics. Considering our documents and topics as these sorts of "mixtures" helps us to mimic the thematic subtleties inherent in natural language.
[Source](https://www.tidytextmining.com/topicmodeling.html#latent-dirichlet-allocation)

We can import `LatentDirichletAllocation` from scikit-learn to run our own LDA topic models in Python.
```
from sklearn.decomposition import LatentDirichletAllocation
```
### Converting Documents to Vectors
In order to run our topic models, we'll need to convert each document in the corpus into a fixed-length vector of token counts. We can accomplish this using the [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html?highlight=vectorizer#sklearn.feature_extraction.text.CountVectorizer) function we imported earlier in the chapter.
For now, let's set our parameters so that we convert all text to lower case, only look at unigrams, only look at terms with a document frequency of .90 or below, use the default 'english' stopwords list, and only consider the top 1,000 terms in our corpus.
```
vectorizer = CountVectorizer(lowercase = True,
ngram_range = (1,1),
max_df = .90,
stop_words = 'english',
max_features = 1000)
```
After setting our parameters, we can `fit` the vectorizer to the `speech_text` key in our UN General Debate dataframe to build a vocabulary out of the raw documents.
*Note*: You'll run into an Attribute Error if the key you plan to fit the vectorizer to contains any missing values. While we don't have to worry about this with our UN dataframe, if you encounter such an error in the future, be sure to [clean your dataframe](https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html#missing-data) before attempting to fit the vectorizer.
```
vectorizer.fit(un_df['speech_text'])
```
We can use the `len` function, along with `get_feature_names`, to ensure we're dealing with a vocabulary composed of the top 1,000 highest-frequency terms in our corpus.
```
len(vectorizer.get_feature_names())
```
Now we'll want to use the vectorizer to `transform` the raw documents into a document-term matrix.
```
un_word_counts = vectorizer.transform(un_df['speech_text'])
```
### Running the LDA Model
Now that we've vectorized our dataset, we're just about ready to run our first LDA model in Python. Before we do, though, we'll want to set our parameters. Below is some information on the parameters we'll set.
#### `LatentDirichletAllocation` Parameters
- **n_components**: Sets the number of topics generated. We can set this as high or as low as we like, depending on the size and character of the texts in our corpus.
- **max_iter**: Sets the maximum number of iterations. By default, `max_iter` is set to 10.
> - ***Note***: In almost every case, we'll want to set `max_iter` above 10. It's highly unlikely our models will converge in 10 iterations or less. In the following example, we'll set `max_iter` to 50, a more reasonable maximum iteration threshold.
- **evaluate_every**: Lets us adjust how frequently we gauge the perplexity of our model across iterations. By default, `evaluate_every` is set to -1, which disables perplexity evaluation during fitting.
> - ***Note***: Leaving `evaluate_every` at its default leaves our model without any built-in goodness of fit measure. We'll discuss other measurements of model fit later in the chapter, but for now it's useful to set `evaluate_every` to some positive number. In the following example, we'll set `evaluate_every` to 5, so perplexity is evaluated every 5 iterations.
- **n_jobs**: Sets the number of concurrently running processes. When set to -1, we'll use all processors. To use all processors but one, set `n_jobs` to -2.
- **verbose**: Lets us determine whether or not every step of the process is logged. If > 0, we'll be able to see what's going on with our LDA model through the output in real time.
```
lda_model = LatentDirichletAllocation(n_components = 10,
max_iter = 50,
evaluate_every = 5,
n_jobs = -1,
verbose = 1)
```
With our parameters set, we can `fit` the LDA model to our document-term matrix of the UN General Debate transcripts.
*Note*: It's going to take a while to work our way through up to 50 iterations. That's alright.
```
lda_model.fit(un_word_counts)
```
Congrats, you've run your first topic model on Python!
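Recall that LDA treats each document as a mixture of topics. If you'd like to inspect those mixtures directly, `LatentDirichletAllocation` provides a `transform` method that returns the per-document topic proportions (one row per document, one column per topic, with each row summing to 1). A quick sketch:
```
# Per-document topic proportions: shape (n_documents, n_topics)
doc_topic = lda_model.transform(un_word_counts)
print(doc_topic.shape)
print(doc_topic[0].round(2))  # topic mixture for the first speech
```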
## Some fit statistics
While we can intuitively "eyeball" topic quality as a first step, it's hard to do so objectively. Calculating some fit statistics can help us to evaluate our topics' quality numerically.
`LatentDirichletAllocation` includes a few handy methods for calculating fit statistics:
- **`score()`** lets us calculate the approximate log-likelihood of the model parameters we've set, given our data. The higher this number is, the better our topic fit.
- **`perplexity()`**, another (normalized) transformation of the log-likelihood, calculates the amount of "surprise" our model experiences if we introduce some previously unseen data. The lower this number is, the better our topic fit.
To learn more about evaluating fit for LDA topic models in Python, see [here](https://towardsdatascience.com/evaluate-topic-model-in-python-latent-dirichlet-allocation-lda-7d57484bb5d0).
```
print("Log Likelihood: ", lda_model.score(un_word_counts))
print("Perplexity: ", lda_model.perplexity(un_word_counts))
```
### Guidelines on topic fit
1. Low perplexity on test data.
- Remember that the lower our perplexity score, the better our topics fit the data (a quick held-out comparison is sketched after this list).
2. Topical coherence
- On the other hand, we want our models to receive the highest possible scores for topical coherence.
3. Best fit in a classification task.
- We'll discuss classification tasks in more detail in another chapter. These tasks provide an additional means of determining our models' goodness of fit.
4. Extract more topics and then bin them yourself.
- It's also possible to just extract more topics and use your human intuition to bin them based on contextual or thematic similarity.
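As a purely illustrative sketch of guideline 1, you could split the document-term matrix and compare perplexity on documents the model never saw during fitting. The split size and settings below are assumptions, not part of the chapter's main workflow.
```
from sklearn.model_selection import train_test_split

# Hold out 20% of the document-term matrix for evaluation
train_counts, test_counts = train_test_split(un_word_counts,
                                             test_size=0.2,
                                             random_state=42)

holdout_lda = LatentDirichletAllocation(n_components=10, max_iter=50, n_jobs=-1)
holdout_lda.fit(train_counts)

print("Training perplexity:", holdout_lda.perplexity(train_counts))
print("Held-out perplexity:", holdout_lda.perplexity(test_counts))
```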
```
print(lda_model.get_params())
```
## Visualizing Topics
In this section we'll discuss some of the ways we can visualize the topics we've created with our LDA model.
### pdtext
First, we can import `topic_words` from the `pdtext` package in order to create an easily interpretable matrix of our topics. Below, we'll look at the first 10:
```
from pdtext.tm import topic_words
topic_words(lda_model, vectorizer).head(10)
```
Also from `pdtext`, `topic_pred` will let us see which documents are associated with which topics.
```
from pdtext.tm import topic_pred
un_topics = topic_pred(lda_model, un_word_counts, vectorizer)
un_topics
```
We can now use our topics as features in order to get a better handle on topic patterns across texts.
One way to do this is to generate a new key in our United Nations dataframe. For example, we can create a `post_soviet` key to divide our general debate speeches between those that occurred prior to the fall of the Soviet Union (where `post_soviet` = False), and those that occurred after the fall of the Soviet Union (where `post_soviet` = True).
```
un_df['post_soviet'] = un_df['speech_year'] > 1991
un_topics.groupby(un_df['post_soviet']).mean()
un_topics.groupby(un_df['post_soviet']).mean().T
```
### pyLDAvis
The [pyLDAvis](https://pyldavis.readthedocs.io/en/latest/readme.html) library, a port of the [LDAvis](https://github.com/cpsievert/LDAvis) package for R, provides a variety of tools for interactive topic model visualization.
```
%pip install pyldavis
```
`pyLDAvis` is conveniently compatible with `LatentDirichletAllocation` from scikit-learn.
```
import pyLDAvis
import pyLDAvis.sklearn
```
We'll also import `pyplot` from matplotlib to allow for the generation of interactive plots in Python.
```
import matplotlib.pyplot as plt
```
Now we'll want to use the `enable_notebook()` function to allow us to display visualizations in our notebook.
```
pyLDAvis.enable_notebook()
```
Finally, we can use the `prepare()` function to transform the data from our LDA model into interactive visualizations!
```
pyLDAvis.display(pyLDAvis.sklearn.prepare(lda_model, un_word_counts, vectorizer, mds='tsne'))
```
### Topic Modeling Exercise 2
In your group, do 1 and 2 in 10_Topic_Modeling_group
#### Text Classification and Sentiment Analysis
We'll be using the following to classify texts and analyze sentiment within our corpus:
- [seaborn](https://seaborn.pydata.org/), a data visualization library based on `matplotlib`. `seaborn` is also discussed in a previous chapter on text classification
- The `SentimentIntensityAnalyzer` available through [vaderSentiment](https://github.com/cjhutto/vaderSentiment), a lexicon for sentiment analysis discussed in the previous chapter on Word Lists and Sentiment Analysis. With this, we'll be able to determine the intensity of positive, negative, and neutral sentiments contained within the documents in our corpus.
- [Afinn](https://pypi.org/project/afinn/), another sentiment analysis tool discussed in the previous chapter on Word Lists and Sentiment Analysis. `Afinn` will produce a single numerical sentiment score, from -5 (negative sentiment) to 5 (positive sentiment).
```
%matplotlib inline
import pandas as pd
import seaborn as sns
%pip install afinn
%pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from afinn import Afinn
```
In addition to `Afinn` and the `SentimentIntensityAnalyzer`, we'll also import a number of new functions from `sklearn`.
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import LatentDirichletAllocation
```
First among these imported functions is [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html?highlight=logisticregression#sklearn.linear_model.LogisticRegression), scikit-learn's logistic regression classifier. While we've used `LogisticRegression` earlier in the chapter on Classification, at that point we didn't specify any parameters. In this exercise, we'll specify two:
- **solver**: In the following example we'll be setting this to `lbfgs`, or [Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm](http://aria42.com/blog/2014/12/understanding-lbfgs). If not otherwise specified, `solver` will default to `lbfgs`. There are a number of other solvers available, discussed in more detail in the `LogisticRegression`[User Guide](https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression).
- **max_iter**: Similar to what we saw earlier with `LatentDirichletAllocation`, here `max_iter` allows us to set a maximum threshold on the number of iterations in our regression before the solvers converge.
We'll set the maximum iterations in our own `lr_classifier` to 5,000.
```
lr_classifier = LogisticRegression(solver = 'lbfgs', max_iter= 5000)
```
Let's `fit` the model to our data based on the `post_soviet` key we've created.
```
lr_classifier.fit(un_topics, un_df['post_soviet'])
```
Our classifier can now be used to `predict` labels for our topics; comparing these predictions with the true labels gives us an accuracy score.
```
prediction = lr_classifier.predict(un_topics)
print(accuracy_score(un_df['post_soviet'], prediction))
```
We're also now able to produce a classification report:
```
print(classification_report(un_df['post_soviet'], prediction))
```
Finally, we can compute a confusion matrix and plot it with seaborn's `heatmap` function.
```
import seaborn as sns
cm = confusion_matrix(un_df['post_soviet'], prediction)
sns.heatmap(cm, annot=True, cmap="Greens", fmt='g')
```
### Topic Modeling Exercise 3
Hopefully, you feel a bit more comfortable running LDA models in Python. In the next chapter, we'll be moving beyond LDA models to cover additional forms of topic modeling.
#### References:
Finn Årup Nielsen. 2011. “A New ANEW: Evaluation of a Word List for Sentiment Analysis in Microblogs.” *Proceedings of the ESWC2011 Workshop on ‘Making Sense of Microposts’: Big Things Come in Small Packages.* Volume 718 in CEUR Workshop Proceedings: 93-98. Matthew Rowe, Milan Stankovic, Aba-Sah Dadzie, Mariann Hardey (editors).
Hutto, C.J. and Eric Gilbert. 2014. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014.
Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.
Sievert, Carson and Kenneth Shirley. 2014. "LDAvis: A Method for Visualizing and Interpreting Topics." In *Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces*, pp. 63-70.
Silge, Julia. and David Robinson. 2017. *Text Mining with R: A Tidy Approach.* "O'Reilly Media, Inc.".
<a href="https://colab.research.google.com/github/kaiser1711/Subnanosecond-Fluctuations-in-Low-Barrier-Nanomagnets/blob/main/Fluctuations_in_LBMs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
from numba import njit
from scipy import constants as constants; from scipy import signal as sig
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_theme()
sns.set_context("poster")
#mx=sin(theta) cos(phi)
#my=sin(theta) sin(phi)
#mz=cos(theta)
@njit #nopython compiler for faster execution time
def llg_dt(v,alpha,hk,hp,hx,hy,hz):
theta=v[0]
phi=v[1]
dtheta_hk_z=hk*(-alpha*np.sin(theta)*np.cos(theta)) #uniaxial anisotropy in z-direction
dtheta_hp_x=-hp*(-np.sin(phi)+alpha*np.cos(theta)*np.cos(phi))*np.sin(theta)*np.cos(phi) #demagnetization field along the x-axis (Note: this differs from the PRApplied paper, where the demag. field is along the z-axis)
dtheta_ex=alpha*np.cos(theta)*np.sin(phi)*hy+np.cos(phi)*hy-np.sin(phi)*hx-alpha*np.sin(theta)*hz+alpha*np.cos(theta)*np.cos(phi)*hx #external field
dphi_hk_z=np.cos(theta) #uniaxial anisotropy in z-direction
dphi_hp_x=hp*(np.cos(phi)**2*np.cos(theta)+alpha*np.cos(phi)*np.sin(phi)) #demagnetization field in x-axis
dphi_ex=hz+(-alpha*hx*np.sin(phi)-hx*np.cos(theta)*np.cos(phi))/np.sin(theta)+(alpha*hy*np.cos(phi)-hy*np.cos(theta)*np.sin(phi))/np.sin(theta) #external field
dtheta=(gamma/(1+alpha**2))*(dtheta_hk_z+dtheta_hp_x+dtheta_ex)
dphi=(gamma/(1+alpha**2))*(dphi_hk_z+dphi_hp_x+dphi_ex)
return np.array([dtheta,dphi])
@njit
def timestep(theta,phi,alpha,hk,hp,hx,hy,hz,dt):
v=np.array([theta,phi])
k1=dt*llg_dt(v,alpha,hk,hp,hx,hy,hz)
k2=dt*llg_dt(v+k1/2,alpha,hk,hp,hx,hy,hz)
k3=dt*llg_dt(v+k2/2,alpha,hk,hp,hx,hy,hz)
k4=dt*llg_dt(v+k3,alpha,hk,hp,hx,hy,hz)
v=v+(k1+2*k2+2*k3+k4)/6 #Runge-Kutta method
theta=v[0]
phi=v[1]
return theta,phi
@njit
def simulate(dt,NT,Ms,Vol,alpha,gamma,theta_init,phi_init,hk,hp,hx,hy,hz):
kB=constants.k*1e7
theta=theta_init
phi=phi_init
theta_arr=np.zeros(NT,)
phi_arr=np.zeros(NT,)
for n in range(0,NT):
theta,phi=timestep(theta,phi,alpha,hk,hp,hx[n],hy[n],hz[n],dt)
theta_arr[n]=theta
phi_arr[n]=phi
return theta_arr,phi_arr
```
**Parameters** (in cgs units)
```
#material parameters
T=300 #temperature (needed for noise)
Ms=1100 #saturation magnetization
Vol=(10e-7/2)**2*np.pi*1e-7 #volume of magnet (circular shape 10nm diameter, 1nm thickness)
gamma=constants.value('electron gyromag. ratio')/1e4 #gyromagnetic ratio
kB=constants.k*1e7 # Boltzmann constant
alpha=0.01 #Gilbert damping
hk=0 # uniaxial field
hp=4*np.pi*Ms #demag. field
#simulation parameters
NT=int(1e7) #total timesteps
dt=1e-6/NT #delta timestep
```
**External field** (here just Langevin field)
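The standard deviation of the thermal field used below follows the usual fluctuation–dissipation form (this simply restates the `sigma` line in the next cell):

$$\sigma = \sqrt{\frac{2\,\alpha\,k_B T}{M_s\,V\,\gamma\,\Delta t}},$$

and each Cartesian component of the Langevin field at each timestep is drawn as an independent zero-mean Gaussian with this standard deviation.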
```
sigma=np.sqrt(2*alpha*kB*T/(Ms*Vol*gamma*dt))
hxL=sigma*np.random.normal(loc=0,scale=1,size=NT)
hyL=sigma*np.random.normal(loc=0,scale=1,size=NT)
hzL=sigma*np.random.normal(loc=0,scale=1,size=NT)
```
**Running simulation**
```
#initial conditions
theta_init=0.01
phi_init=np.pi/2
#simulation
theta_arr,phi_arr=simulate(dt,NT,Ms,Vol,alpha,gamma,theta_init,phi_init,hk,hp,hxL,hyL,hzL)
#time
time=np.linspace(0,dt*NT,NT)
```
**Time sequence**
```
plt.plot(time*1e9,np.cos(theta_arr))
plt.xlim([1,11])
plt.xlabel('time (ns)')
plt.ylabel('$m_x$')
```
**Autocorrelation (Fig. 2 a):** The autocorrelation quantifies the mean reversal time of the magnet in a thermal bath.
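The analytical curve plotted in the next cell (Eqn. 7 in the code comment) is a Gaussian decay in the lag $\tau$:

$$C(\tau) \approx \exp\!\left(-\frac{\gamma^2 h_p\,k_B T}{2\,M_s V}\,\tau^2\right).$$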
```
ctime=np.linspace(-NT*dt,NT*dt,NT*2-1)
cX=sig.fftconvolve(np.cos(theta_arr), np.flip(np.cos(theta_arr)), mode='full')
plt.plot(ctime*1e12,cX/np.max(cX),label='numerical')
plt.plot(ctime*1e12,np.exp(-gamma**2*hp*kB*T/Ms/Vol*ctime**2/2),label='analytical') #Eqn. 7
plt.xlabel('time (ps)')
plt.ylabel('ACF')
plt.xlim([0,200])
plt.legend()
```
**Ensemble simulation**
```
ensemble=500
#simulation parameters
NT=int(1e5) #total timesteps
dt=1e-9/NT #delta timestep
#time
time=np.linspace(0,dt*NT,NT)
#initialize ensemble angles
phi_en=np.zeros((NT,ensemble))
theta_en=np.zeros((NT,ensemble))
#Langevin fields
sigma=np.sqrt(2*alpha*kB*T/(Ms*Vol*gamma*dt))
hxL=sigma*np.random.normal(loc=0,scale=1,size=(NT,ensemble))
hyL=sigma*np.random.normal(loc=0,scale=1,size=(NT,ensemble))
hzL=sigma*np.random.normal(loc=0,scale=1,size=(NT,ensemble))
#simulation for every ensemble
for ii in range(0,ensemble):
#print(ii)
theta_arr,phi_arr=simulate(dt,NT,Ms,Vol,alpha,gamma,theta_init,phi_init,hk,hp,hxL[:,ii],hyL[:,ii],hzL[:,ii])
phi_en[:,ii]=phi_arr
theta_en[:,ii]=theta_arr
```
**Memory loss (Fig. 3a):** Memory loss quantifies how long it takes for the magnetization to become unpredictable.
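The analytical decay plotted in the next cell (Eqn. 12 in the code comment) is

$$\langle m_x(t)\rangle \approx \exp\!\left(-\frac{\alpha\,\gamma^3 h_p^2\,k_B T}{3\,M_s V}\,t^3\right).$$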
```
plt.plot(time*1e12,np.mean(np.cos(theta_en),axis=1),label='numerical')
plt.plot(time*1e12,np.exp(-alpha*gamma**3*hp**2*kB*T/Ms/Vol*time**3/3),label='analytical') #Eqn. 12
plt.xlabel('time (ps)')
plt.ylabel('avg. $m_x$')
plt.xlim([0,400])
plt.ylim([-0.2,1.1])
plt.legend()
```
# ORF recognition by CNN
Use a variable number of bases between START and STOP. Thus, ncRNA will have its STOP out-of-frame or too close to the START, and pcRNA will have its STOP in-frame and far from the START.
```
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=10000 # how many protein-coding sequences
NC_SEQUENCES=10000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=32 # how long is each sequence
CDS_LEN=16 # min CDS len to be coding
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 16
DROP_RATE = 0.2
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=100 # how many times to train on all the data
SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=3 # train the model this many times (range 1 to SPLITS)
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
from RNA_describe import Random_Base_Oracle
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import prepare_inputs_len_x_alphabet
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle
from SimTools.RNA_prep import prepare_inputs_len_x_alphabet
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
rbo=Random_Base_Oracle(RNA_LEN,True)
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
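# "No skill" baseline: predict a constant score of 0 for every sample.
# Its ROC curve is the diagonal, so its AUC is 0.5 and serves as a reference
# for the trained model's curve below.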
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
```
# LAB 5b: Deploy and predict with Keras model on Cloud AI Platform.
**Learning Objectives**
1. Setup up the environment
1. Deploy trained Keras model to Cloud AI Platform
1. Online predict from model on Cloud AI Platform
1. Batch predict from model on Cloud AI Platform
## Introduction
In this notebook, we'll deploy our Keras model to Cloud AI Platform and create predictions.
We will set up the environment, deploy a trained Keras model to Cloud AI Platform, online predict from deployed model on Cloud AI Platform, and batch predict from deployed model on Cloud AI Platform.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/5b_deploy_keras_ai_platform_babyweight.ipynb).
## Set up environment variables and load necessary libraries
Import necessary libraries.
```
import os
```
### Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
```
%%bash
PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
# Change these to try this notebook out
PROJECT = "your-project-name-here" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.1"
%%bash
gcloud config set compute/region $REGION
```
## Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported the model to in our last [lab](../solutions/10_train_keras_ai_platform_babyweight.ipynb). We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
```
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
```
## Lab Task #2: Deploy trained model
Deploying the trained model to act as a REST web service is a simple gcloud call.
```
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1 | tr -d '[:space:]')
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.1 \
--python-version=3.7
```
## Use model to make online prediction.
### Python API
We can use the Python API to send a JSON request to the service's endpoint to make it predict a baby's weight. The order of the responses matches the order of the instances.
```
from oauth2client.client import GoogleCredentials
import requests
import json
MODEL_NAME = "babyweight"
MODEL_VERSION = "ml_on_gcp"
token = GoogleCredentials.get_application_default().get_access_token().access_token
api = "https://ml.googleapis.com/v1/projects/{}/models/{}/versions/{}:predict" \
.format(PROJECT, MODEL_NAME, MODEL_VERSION)
headers = {"Authorization": "Bearer " + token }
data = {
"instances": [
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Single(1)",
"gestation_weeks": 39
},
{
"is_male": "False",
"mother_age": 29.0,
"plurality": "Single(1)",
"gestation_weeks": 38
},
{
"is_male": "True",
"mother_age": 26.0,
"plurality": "Triplets(3)",
"gestation_weeks": 39
},
{
"is_male": "Unknown",
"mother_age": 29.0,
"plurality": "Multiple(2+)",
"gestation_weeks": 38
},
]
}
response = requests.post(api, json=data, headers=headers)
print(response.content)
```
The predictions for the four instances were: 5.33, 6.09, 2.50, and 5.86 pounds respectively when I ran it (your results might be different).
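If you'd rather work with the values programmatically than read the raw response bytes, the body is JSON with a top-level `predictions` list (the exact structure of each entry depends on your model's output signature). A minimal, hedged sketch:
```
predictions = response.json().get("predictions")
print(predictions)
```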
### gcloud shell API
Alternatively, we could use gcloud from the command line. Create a newline-delimited JSON file with one instance per line and submit it using gcloud.
```
%%writefile inputs.json
{"is_male": "True", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
{"is_male": "False", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
```
Now call `gcloud ai-platform predict` using the JSON we just created and point to our deployed `model` and `version`.
```
%%bash
gcloud ai-platform predict \
--model=babyweight \
--json-instances=inputs.json \
--version=ml_on_gcp
```
## Use model to make batch prediction.
Batch prediction is commonly used when you have thousands to millions of predictions. It will create an actual Cloud AI Platform job for prediction.
```
%%bash
INPUT=gs://${BUCKET}/babyweight/batchpred/inputs.json
OUTPUT=gs://${BUCKET}/babyweight/batchpred/outputs
gsutil cp inputs.json $INPUT
gsutil -m rm -rf $OUTPUT
gcloud ai-platform jobs submit prediction babypred_$(date -u +%y%m%d_%H%M%S) \
--data-format=TEXT \
--region ${REGION} \
--input-paths=$INPUT \
--output-path=$OUTPUT \
--model=babyweight \
--version=ml_on_gcp
```
## Lab Summary:
In this lab, we set up the environment, deployed a trained Keras model to Cloud AI Platform, online predicted from deployed model on Cloud AI Platform, and batch predicted from deployed model on Cloud AI Platform.
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Model Serving Architecture
## Documentation on model servers
---
The video lecture covered some of the most popular model servers: TensorFlow Serving, TorchServe, KubeFlow Serving, and the NVIDIA Triton Inference Server. Here are the links to the relevant documentation for each of these options:
- <a href = "https://www.tensorflow.org/tfx/serving/architecture">TensorFlow Serving </a>
- <a href = "https://github.com/pytorch/serve">TorchServe</a>
- <a href = "https://www.kubeflow.org/docs/external-add-ons/serving/">KubeFlow Serving</a>
- <a href = "https://developer.nvidia.com/nvidia-triton-inference-server">NVIDIA Triton</a>
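Most of these servers expose a similar HTTP request/response pattern for inference. As a point of reference, here is a minimal sketch of calling TensorFlow Serving's REST predict endpoint; it assumes a server is already running locally on the default REST port 8501 with a model named `my_model` (the host, model name, and input shape are placeholders):
```
import json
import requests

# TensorFlow Serving exposes REST predictions at /v1/models/<model_name>:predict
SERVER_URL = "http://localhost:8501/v1/models/my_model:predict"  # placeholder host and model name

# A batch of instances matching the model's expected input signature (shape assumed here)
payload = {"instances": [[1.0, 2.0, 5.0], [3.0, 4.0, 1.5]]}

response = requests.post(SERVER_URL, data=json.dumps(payload))
response.raise_for_status()

# The response body contains a "predictions" field with one entry per instance
print(response.json()["predictions"])
```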
## Ungraded Lab - Deploy an ML model with FastAPI and Docker
---
During this lab you will work with FastAPI and Docker to deploy a Dockerized version of your model while learning important concepts for container-based applications.
Follow this <a href = "https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course4/week2-ungraded-labs/C4_W2_Lab_1_FastAPI_Docker/README.md">link</a> to start the lab!
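To give a feel for the kind of server the lab builds up to, here is a minimal sketch of a FastAPI prediction endpoint. This is not the lab's actual server: the request schema, the dummy scoring rule, and the suggested file name are placeholders.
```
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Minimal prediction server")

# Request schema; these feature names are illustrative placeholders
class PredictionRequest(BaseModel):
    feature_1: float
    feature_2: float

@app.post("/predict")
def predict(request: PredictionRequest):
    # A real server would load a trained model once at startup and call model.predict here;
    # this dummy rule only keeps the sketch self-contained.
    score = 1.0 if request.feature_1 + request.feature_2 > 1.0 else 0.0
    return {"prediction": score}

# Run locally with: uvicorn main:app --reload   (assuming this file is saved as main.py)
```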
# Scaling Infrastructure
## Learn about scaling with boy bands
---
In the next few minutes you’ll learn about horizontal and vertical scaling. Before going into that, here’s a fun case study on managing scale.
In this extreme case, a famous boy band called ‘One Direction’ hosted a 10-hour live stream on YouTube, where they instructed fans to visit a web site with a quiz on it every 10 minutes. This led to a really interesting scalability pattern: the application would have zero usage for the vast majority of the time, but then, every 10 minutes, hundreds of thousands of people might hit it at once.
It’s a complex problem to solve when it comes to scaling. It could be very expensive to operate. Using smart scaling strategies, Sony Music and Google solved this problem very inexpensively. Laurence isn’t allowed to share how much it cost for the cloud services, but, when he and several of the other engineers went out for celebration drinks after the success of the project, the bar bill was more expensive than the cloud bill. (And they didn’t drink a lot!)
Check out the talk about how scaling worked for this system here: https://www.youtube.com/watch?v=aIxNm5Eed_8
Learn about the event and the app here: https://www.computerweekly.com/news/2240228060/Sony-Music-Google-cloud-One-Directions-1D-Day-event-platform-services
## Ungraded Lab: Intro to Kubernetes
---
In this lab, you will get more hands-on practice with Kubernetes in preparation for this week's graded assignment. If you haven't already, please clone the public repo. You can do so with the following commands:
```
git clone https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public
```
If you've already cloned this repo before, please do a `git pull` to make sure that you have the latest version of the files.
After that, please navigate to course4/week2-ungraded-labs/C4_W2_Lab_2_Intro_to_Kubernetes/ and read the root README.md with your favorite Markdown reader. Alternatively, you can view that README directly on GitHub using its built-in Markdown viewer. Either way, the README file contains the instructions on how to run the lab on your machine.
In case you run into any issues, remember to post it in Discourse so mentors and course staff can assist.
Happy learning!
# Online Inference
## Ungraded Lab - Latency testing with Docker Compose and Locust
---
During this lab you will work with Docker Compose and Locust to perform load testing on the servers you coded in the previous ungraded lab.
Follow this <a href = "https://github.com/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course4/week2-ungraded-labs/C4_W2_Lab_3_Latency_Test_Compose/README.md">link</a> to start the lab!
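As a preview of what a load test looks like, here is a minimal sketch of a `locustfile.py` that repeatedly hits a prediction endpoint. The endpoint path, payload, and host are placeholders; the lab walks you through the real setup with Docker Compose.
```
from locust import HttpUser, task, between

class PredictionUser(HttpUser):
    # Each simulated user waits 1-2 seconds between requests
    wait_time = between(1, 2)

    @task
    def predict(self):
        # Placeholder endpoint and payload; point these at your own server
        self.client.post("/predict", json={"feature_1": 0.5, "feature_2": 0.7})

# Run with: locust -f locustfile.py --host http://localhost:80
```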
# Data preprocessing
## Data preprocessing
---
Apache Beam is a framework that gives you a unified programming model for implementing batch and streaming data processing jobs on any execution engine. It’s ideally suited for data preprocessing!
Go to https://beam.apache.org/get-started/try-apache-beam/ to try Apache Beam in a Colab so you can get a handle on how the APIs work. Make sure you try it in Python as well as Java by using the tabs at the top.
Note: You can click the Run in Colab button below each code snippet to launch Colab. In the Colab menu bar, click Runtime > Change Runtime type and select Python 3 before running the code cells. The Beam documentation explains the WordCount example in more detail, and you can use the Beam Programming Guide to look up any of the concepts.
You can learn about TensorFlow Transform here: https://www.tensorflow.org/tfx/transform/get_started . It also uses Beam-style pipelines but has modules optimized for preprocessing TensorFlow datasets.
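To give a flavor of the programming model before you open the Colab, here is a minimal sketch of a batch Beam pipeline run with the local DirectRunner; the input lines are hard-coded placeholders, whereas a real preprocessing job would read from files or BigQuery.
```
import apache_beam as beam

# Count word occurrences in a few lines of text using the default DirectRunner
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateLines" >> beam.Create(["to be or not to be", "that is the question"])
        | "SplitWords" >> beam.FlatMap(lambda line: line.split())
        | "CountWords" >> beam.combiners.Count.PerElement()
        | "PrintCounts" >> beam.Map(print)
    )
```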
# Batch Processing with ETL
## Ungraded Lab (Optional): Machine Learning with Apache Beam and TensorFlow
---
This optional lab will show you how to preprocess, train, and make batch predictions with a machine learning model using Apache Beam and TensorFlow Transform. To avoid incurring costs for Cloud resources, you will run the entire pipeline in Colab. We linked the original article, which gives the option to run in GCP, in case you want to give it a shot afterward.
Click <a href = "https://colab.research.google.com/github/https-deeplearning-ai/machine-learning-engineering-for-production-public/blob/main/course4/week2-ungraded-labs/C4_W2_Lab_4_ETL_Beam/C4_W2_Lab_4_Apache_Beam_and_Tensorflow.ipynb">here</a> to launch Colab!
# Example 2: Spectrograms of detected vocalizations
In this notebook we will
* Load vocalization intervals detected in Example 1
* Compute spectrograms for all detected intervals and temporally align them to their center of mass
```
import sys
sys.path.append("../code/soundsep")
import time
import hdbscan
import numpy as np
import matplotlib.pyplot as plt
import umap
from IPython.display import clear_output, Audio, display
from sklearn.decomposition import PCA
from soundsig.sound import plot_spectrogram, spectrogram
from soundsig.signal import bandpass_filter
from audio_utils import get_amplitude_envelope
from interfaces.audio import LazyWavInterface
from plotting_utils import MultiChannelPlotter, MultiSpecPlotter
%load_ext autoreload
%autoreload 2
```
## 1. Load the data
We will be using the same audio file used in the notebook Example 1, `example.wav`. In addition, we will load the vocalization intervals we found, saved in `example_intervals.npy`.
```
audio_signal = LazyWavInterface("example.wav", dtype=np.float64)
intervals = np.load("example_intervals.npy")[()]
```
### Let's check a couple just to make sure they are okay.
```
NUM_EXAMPLES = 5
random_indexes = np.random.choice(np.arange(len(intervals)), size=NUM_EXAMPLES, replace=False)
for randind in random_indexes:
t1, t2 = intervals[randind]
t_arr, sig = audio_signal.time_slice(t1, t2)
sig = sig - np.mean(sig, axis=0)
sig = bandpass_filter(sig.T, audio_signal.sampling_rate, 1000, 8000).T
specs = []
for ch in range(sig.shape[1]):
t_spec, f_spec, spec, _ = spectrogram(
sig[:, ch],
audio_signal.sampling_rate,
1000,
50,
min_freq=500,
max_freq=8000,
cmplx=False
)
specs.append((t_spec, f_spec, spec))
width = (t2 - t1) * 16
plotter = MultiSpecPlotter(
specs,
panel_size=(width, 3),
layout="horizontal",
colorbar=False,
dBNoise=30,
)
for ax_idx in range(len(plotter.axes)):
plotter.axes[ax_idx].set_title("Ch{}".format(ax_idx))
plotter.plot()
# Play the audio with a 10ms buffer on each side
t_arr, sig = audio_signal.time_slice(t1 - 0.01, t2 + 0.01)
sig = sig - np.mean(sig, axis=0)
sig = bandpass_filter(sig.T, audio_signal.sampling_rate, 1000, 8000).T
display(Audio(sig[:, 0], rate=audio_signal.sampling_rate, normalize=False))
display(Audio(sig[:, 1], rate=audio_signal.sampling_rate, normalize=False))
```
## 2. Collect all the spectrograms to cluster on
Each call can be a different length, which makes clustering difficult because most clustering methods need every datapoint to have the same shape.
To find the initial embedding and do a rough clustering, we will first compute amplitude envelopes for EVERY INTERVAL and find their "center of mass" - the time at which the power in that vocalization or sound is centered.
Then we will take a fixed-length window around the center of mass (40 ms on either side in the code below) and use that as the representative datapoint for clustering.
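Concretely, with the amplitude envelope summed across channels, $A(t) = \sum_{ch} a_{ch}(t)$, the center of mass computed in the code below is

$$ t_{\mathrm{com}} = \frac{\sum_t t \, A(t)}{\sum_t A(t)}, $$

i.e. an amplitude-weighted average over the time axis, shifted back into absolute time by adding the start of the buffered slice.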
```
centers_of_mass = []
all_call_spectrograms = []
all_calls = []
_time = time.time()
for idx, (t1, t2) in enumerate(intervals):
print("Working on {}/{} ({:.2f}s elapsed)".format(idx + 1, len(intervals), time.time() - _time), end="\r")
    # Original interval with a small buffer of 10ms on either side
buffer = 0.01
t_arr, sig = audio_signal.time_slice(t1 - buffer, t2 + buffer)
sig = sig - np.mean(sig, axis=0)
sig = bandpass_filter(sig.T, audio_signal.sampling_rate, 1000, 8000).T
amp_env = get_amplitude_envelope(sig, fs=audio_signal.sampling_rate,
lowpass=8000, highpass=1000)
# Compute the temporal center of mass of the signal
center_of_mass = t1 - buffer + np.sum((t_arr * np.sum(amp_env, axis=1))) / np.sum(amp_env)
# Recentered signal with a small buffer of 40ms on either side
buffer = 0.04
t_arr, sig = audio_signal.time_slice(center_of_mass - buffer, center_of_mass + buffer)
sig = sig - np.mean(sig, axis=0)
sig = bandpass_filter(sig.T, audio_signal.sampling_rate, 1000, 8000).T
specs = []
all_calls.append(sig)
for ch in range(sig.shape[1]):
        # Slightly lower spectrogram resolution makes this step run faster;
        # adjust these parameters if you need a higher-resolution spectrogram
_, _, spec, _ = spectrogram(
sig[:, ch],
audio_signal.sampling_rate,
1000,
50,
min_freq=1000,
max_freq=8000,
cmplx=False
)
specs.append(spec)
all_call_spectrograms.append(np.array(specs))
all_call_spectrograms = np.array(all_call_spectrograms)
all_calls = np.array(all_calls)
```
## 3. Save the spectrograms to a file
```
np.save("example_spectrograms.npy", all_call_spectrograms)
np.save("example_calls.npy", all_calls)
```
## RUN: Atlas Construction
This notebook runs the atlas prediction pipeline across the different datasets.
Model selection and hyperparameter optimization were done in `DEV_Atlas.ipynb`.
### Prep
```
### Imports
# Generic
from __future__ import division
import os, sys, pickle
import numpy as np
import matplotlib.pyplot as plt
# Modules
from katachi.pipelines import atlas_construction as ac
### Function to parse relevant IDs from IDR bulk data
def parse_from_IDR(dir_path, target):
# Get all samples
samples = [d for d in os.listdir(dir_path) if len(d)==10
and os.path.isdir(os.path.join(dir_path, d))]
# Select relevant samples
relevant_samples = []
for d in samples:
# Get image files
images = [i for i in os.listdir(os.path.join(dir_path, d))
if i.startswith(d) and i.endswith('.tif')]
# Special case for membranes only
if target=='membranes_only':
if all(['lynEGFP' in img for img in images]):
relevant_samples.append(d)
# All other cases
else:
if any([img.endswith(target+'.tif') for img in images]):
relevant_samples.append(d)
return relevant_samples
```
### Atlas Construction based on TFOR
**tagRFPtUtrCH**
```
### Predict tagRFPtUtrCH channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'tagRFPtUtrCH')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'tagRFPtUtrCH_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 14
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**NLStdTomato**
```
### Predict NLStdTomato channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'NLStdTomato')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'NLStdTomato_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 10.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.1 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**b4galT1tagRFPt**
```
### Predict b4galT1tagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'b4galT1tagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'b4galT1tagRFPt_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**CDMPRtagRFPt**
```
### Predict CDMPRtagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'CDMPRtagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'CDMPRtagRFPt_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2GM130**
```
### Predict mKate2GM130 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2GM130')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2GM130_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**lysotrackerdeepred**
```
### Predict lysotrackerdeepred channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'lysotrackerdeepred')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'lysotrackerdeepred_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2rab5**
```
### Predict mKate2rab5 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab5')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2rab5_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2rab11**
```
### Predict mKate2rab11 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab11')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2rab11_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
### Atlas Construction based on CFOR
**tagRFPtUtrCH**
```
### Predict tagRFPtUtrCH channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'tagRFPtUtrCH')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'tagRFPtUtrCH_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**NLStdTomato**
```
### Predict NLStdTomato channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'NLStdTomato')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'NLStdTomato_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**b4galT1tagRFPt**
```
### Predict b4galT1tagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'b4galT1tagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'b4galT1tagRFPt_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**CDMPRtagRFPt**
```
### Predict CDMPRtagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'CDMPRtagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'CDMPRtagRFPt_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2GM130**
```
### Predict mKate2GM130 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2GM130')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2GM130_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**lysotrackerdeepred**
```
### Predict lysotrackerdeepred channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'lysotrackerdeepred')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'lysotrackerdeepred_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2rab5**
```
### Predict mKate2rab5 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab5')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2rab5_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
**mKate2rab11**
```
### Predict mKate2rab11 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab11')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2rab11_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
```
|
github_jupyter
|
### Imports
# Generic
from __future__ import division
import os, sys, pickle
import numpy as np
import matplotlib.pyplot as plt
# Modules
from katachi.pipelines import atlas_construction as ac
### Function to parse relevant IDs from IDR bulk data
def parse_from_IDR(dir_path, target):
# Get all samples
samples = [d for d in os.listdir(dir_path) if len(d)==10
and os.path.isdir(os.path.join(dir_path, d))]
# Select relevant samples
relevant_samples = []
for d in samples:
# Get image files
images = [i for i in os.listdir(os.path.join(dir_path, d))
if i.startswith(d) and i.endswith('.tif')]
# Special case for membranes only
if target=='membranes_only':
if all(['lynEGFP' in img for img in images]):
relevant_samples.append(d)
# All other cases
else:
if any([img.endswith(target+'.tif') for img in images]):
relevant_samples.append(d)
return relevant_samples
### Predict tagRFPtUtrCH channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'tagRFPtUtrCH')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'tagRFPtUtrCH_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 14
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict NLStdTomato channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'NLStdTomato')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'NLStdTomato_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 10.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.1 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict b4galT1tagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'b4galT1tagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'b4galT1tagRFPt_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict CDMPRtagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'CDMPRtagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'CDMPRtagRFPt_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2GM130 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2GM130')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2GM130_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict lysotrackerdeepred channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'lysotrackerdeepred')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'lysotrackerdeepred_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2rab5 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab5')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2rab5_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2rab11 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab11')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_TFOR_kmeansPRES_DDDS_CBEmanh']
sec_channel = 'mKate2rab11_LMs_TFOR_kmeansPRES_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MO-SVR'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { 'kernel' : 'rbf',
'C' : 20.0,
'epsilon' : 0.5,
'gamma' : 1.0 / 20.0 * 0.01 }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict tagRFPtUtrCH channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'tagRFPtUtrCH')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'tagRFPtUtrCH_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict NLStdTomato channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'NLStdTomato')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'NLStdTomato_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict b4galT1tagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'b4galT1tagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'b4galT1tagRFPt_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict CDMPRtagRFPt channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'CDMPRtagRFPt')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'CDMPRtagRFPt_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2GM130 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2GM130')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2GM130_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict lysotrackerdeepred channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'lysotrackerdeepred')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'lysotrackerdeepred_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2rab5 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab5')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2rab5_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
### Predict mKate2rab11 channel for all prims that do not have it
# Target directories
train_dirpath = r'data\experimentA\image_data'
predict_dirpath = r'data\experimentA\image_data'
# Target IDs
train_IDs = parse_from_IDR(train_dirpath, 'mKate2rab11')
print "Found %i training IDs!" % len(train_IDs)
# Channels
ref_channel = ['lynEGFP_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh',
'lynEGFP_linUnmix_seg_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh']
sec_channel = 'mKate2rab11_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
# Core settings
outlier_removal_ref = 'isolation_forest'
outlier_removal_sec = 'isolation_forest'
outlier_removal_cov = 'percentile_thresh'
covariates_to_use = 'img.cell.'+sec_channel.split('_')[0]+'.mean_total'
regressor = 'MT-ENetCV'
# Additional parameters
outlier_params_ref = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_sec = { 'isoforest_params' : {'contamination':0.05}}
outlier_params_cov = { 'bounds' : 'lower',
'percentile' : 33 }
regressor_params = { }
atlas_params = { 'zscore_X' : True,
'zscore_y' : True,
'pca_X' : True,
'pca_y' : True,
'rezscore_X' : False,
'rezscore_y' : False,
'subselect_X' : 20,
'subselect_y' : 20,
'add_covariates' : None }
# Additional arguments
recurse = True
ignore_self = False
processes = 10
profiling = True
verbose = True
# Run prediction pipeline
ac.atlas_construction(train_dirpath, predict_dirpath,
ref_channel, sec_channel,
train_IDs=train_IDs, predict_IDs=None,
recurse=recurse, ignore_self=ignore_self,
processes=processes, profiling=profiling, verbose=verbose,
outlier_removal_ref=outlier_removal_ref,
outlier_removal_sec=outlier_removal_sec,
outlier_removal_cov=outlier_removal_cov,
covariates_to_use=covariates_to_use,
regressor=regressor,
outlier_params_ref=outlier_params_ref,
outlier_params_sec=outlier_params_sec,
outlier_params_cov=outlier_params_cov,
regressor_params=regressor_params,
atlas_params=atlas_params)
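### Sketch (not part of the original pipeline): assuming the same parse_from_IDR and ac.atlas_construction signatures used above, the pdCFOR/MT-ENetCV runs, which differ only in the target channel, could equivalently be driven by a single loop
for channel in ['tagRFPtUtrCH', 'NLStdTomato', 'b4galT1tagRFPt', 'CDMPRtagRFPt',
                'mKate2GM130', 'lysotrackerdeepred', 'mKate2rab5', 'mKate2rab11']:
    train_IDs = parse_from_IDR(train_dirpath, channel)
    print("Found %i training IDs for %s!" % (len(train_IDs), channel))
    sec_channel = channel + '_LMs_kmeansPRES_pdCFOR_DDDS_CBEmanh'
    covariates_to_use = 'img.cell.' + channel + '.mean_total'
    ac.atlas_construction(train_dirpath, predict_dirpath,
                          ref_channel, sec_channel,
                          train_IDs=train_IDs, predict_IDs=None,
                          recurse=recurse, ignore_self=ignore_self,
                          processes=processes, profiling=profiling, verbose=verbose,
                          outlier_removal_ref=outlier_removal_ref,
                          outlier_removal_sec=outlier_removal_sec,
                          outlier_removal_cov=outlier_removal_cov,
                          covariates_to_use=covariates_to_use,
                          regressor=regressor,
                          outlier_params_ref=outlier_params_ref,
                          outlier_params_sec=outlier_params_sec,
                          outlier_params_cov=outlier_params_cov,
                          regressor_params=regressor_params,
                          atlas_params=atlas_params)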
### Detecting Parkinson’s Disease – Python Machine Learning Project
#### What is Parkinson’s Disease?
Parkinson’s disease is a progressive disorder of the central nervous system that affects movement and induces tremors and stiffness. It has five stages and affects more than 1 million individuals every year in India. The disease is chronic and currently has no cure; it is a neurodegenerative disorder that affects the dopamine-producing neurons in the brain.
#### What is XGBoost?
XGBoost is a machine learning algorithm designed with speed and performance in mind. XGBoost stands for eXtreme Gradient Boosting and is based on decision trees. In this project, we will import the XGBClassifier from the xgboost library; this is an implementation of the scikit-learn API for XGBoost classification.
In this Python machine learning project, we will build a model with an XGBClassifier, using the Python libraries scikit-learn, numpy, pandas, and xgboost. We’ll load the data, get the features and labels, scale the features, split the dataset into training and testing sets, train the XGBClassifier, and then calculate the accuracy of our model.
#### Prerequisites
pip install numpy pandas scikit-learn xgboost
#### Steps for Detecting Parkinson’s Disease with XGBoost
#### 1. Make necessary imports:
```
import numpy as np
import pandas as pd
import os, sys
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
```
#### 2. Now, let’s read the data into a DataFrame and get the first 5 records
```
df=pd.read_csv('../input/datasetparkinsons/parkinsons.data')
df.head()
```
#### 3. Get the features and labels from the DataFrame (dataset). The features are all the columns except ‘status’, and the labels are those in the ‘status’ column.
```
features=df.loc[:,df.columns!='status'].values[:,1:]  # [:,1:] also drops the non-numeric 'name' column
labels=df.loc[:,'status'].values
```
#### 4. The ‘status’ column has values 0 and 1 as labels; let’s get the counts of each label. We have 147 ones and 48 zeros in the ‘status’ column of our dataset.
```
print(labels[labels==1].shape[0], labels[labels==0].shape[0])
```
#### 5. Initialize a MinMaxScaler and scale the features to between -1 and 1 to normalize them. The MinMaxScaler transforms features by scaling them to a given range. The fit_transform() method fits to the data and then transforms it. We don’t need to scale the labels.
```
scaler=MinMaxScaler((-1,1))
x=scaler.fit_transform(features)
y=labels
```
#### 6. Now, split the dataset into training and testing sets keeping 20% of the data for testing.
```
x_train,x_test,y_train,y_test=train_test_split(x, y, test_size=0.2, random_state=7)
```
#### 7. Initialize an XGBClassifier and train the model. This classifier uses eXtreme Gradient Boosting, a gradient boosting algorithm suited to modern data science problems. It falls under the category of ensemble learning in ML, where we train and predict with many models to produce one superior output.
```
model=XGBClassifier()
model.fit(x_train,y_train)
```
#### 8. Finally, generate y_pred (predicted values for x_test) and calculate the accuracy for the model. Print it out.
```
y_pred=model.predict(x_test)
print(accuracy_score(y_test, y_pred)*100)
```
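Accuracy alone can be optimistic here because the labels are imbalanced (147 ones vs. 48 zeros), so as an optional extra check (not part of the original tutorial) we can also inspect per-class precision and recall:
```
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=['healthy (0)', "parkinson's (1)"]))
```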
## Summary
In this Python machine learning project, we learned to detect the presence of Parkinson’s Disease in individuals using various factors. We used an XGBClassifier for this and made use of the sklearn library to prepare the dataset. This gives us an accuracy of 94.87%, which is great considering the small number of lines of code in this Python project.
# Computer Vision SS 2021
## Exercise Sheet 5: Correlation-based Stereo Vision
### Erhardt Barth / Philipp Gruening / Christoph Linse / Manuel Laufer
Universität zu Lübeck, Institut für Neuro- und Bioinformatik
In case of questions, contact us via email: *{barth, gruening, linse, laufer} @inb.uni-luebeck.de*
## Note: Please insert the names of all participating students:
1.
2.
3.
4.
5.
```
import sys, os
if 'google.colab' in sys.modules:
if os.getcwd() == '/content':
!git clone 'https://github.com/inb-luebeck/cs4250.git'
os.chdir('cs4250')
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Cross-correlation and Autocorrelation
*Cross correlation* is a standard way to estimate the degree of similarity (correlation) between two signals. For a discrete series it is defined as
$$\rho(dt)=\frac{\sum_{i}{[(x_i-\mu_{x})(y_{i+dt}-\mu_{y})]}}{\sqrt{\sum_{i}{(x_i-\mu_{x})^2}}\sqrt{\sum_{i}{(y_{i+dt}-\mu_{y})^2}}}$$
where $\rho$ denotes the *correlation coefficient*, $dt$ is the time shift, and $\mu_{x}$ and $\mu_{y}$ are the means of the two signals $x$ and $y$. The denominator normalizes the correlation coefficient such that $\rho \in [-1,1]$: the bounds indicate maximum (anti-)correlation, and $0$ means no correlation at all. The sums are only evaluated for indices where both $x_i$ and $y_{i+dt}$ exist.
When the correlation of a signal is computed against a temporally shifted version of itself, we call it *autocorrelation*, and define it as
$$\rho(dt)=\frac{\sum_{i}{[(x_i-\mu_{x})(x_{i+dt}-\mu_{x})]}}{\sum_{i}{(x_i-\mu_{x})^2}}.$$
Cross-correlation can be used to determine the delay between two signals. In order to do this, we shift the second signal across a range of time shifts $[-dt, dt]$ and cross-correlate it with the first signal. The point of maximum correlation corresponds to the signal delay.
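As a quick illustration of the formula (independent of the exercise skeleton below; the helper name and the toy signals here are made up for this example), the correlation coefficient for a single shift can be computed directly with NumPy, and the estimated delay is simply the shift with the largest coefficient:
```
import numpy as np

def corr_coeff_at_shift(x, y, dt):
    # Overlapping parts of x_i and y_{i+dt}; for negative dt the roles swap.
    # (Means are taken over the overlapping samples.)
    x_part = x[:len(x) - dt] if dt >= 0 else x[-dt:]
    y_part = y[dt:] if dt >= 0 else y[:len(y) + dt]
    n = min(len(x_part), len(y_part))
    x_c = x_part[:n] - x_part[:n].mean()
    y_c = y_part[:n] - y_part[:n].mean()
    return (x_c * y_c).sum() / np.sqrt((x_c ** 2).sum() * (y_c ** 2).sum())

# Toy check: y is x delayed by 5 samples, so the maximum should occur at dt = 5.
t = np.arange(200)
x = np.sin(0.1 * t)
y = np.roll(x, 5)
shifts = np.arange(-20, 21)
rhos = [corr_coeff_at_shift(x, y, d) for d in shifts]
print(shifts[np.argmax(rhos)])  # -> 5
```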
Write a Python function to calculate the cross-correlation between two signals for the range of delays $[-dt,dt]$.
The `ultrasound.npy` data file from the archive contains two ultrasound signals. Plot the two signals in one plot. Using the above algorithm, cross-correlate them to find the signal delay (using a maximum time shift of 100). Plot the values of the correlation coefficient for the given delay range.
```
def cross_corr_seq(X, Y, dt):
# returns the cross-correlation sequence over
# the time shift range [-dt,dt]
rho_seq=[]
for t in np.arange(-dt, dt):
rho = cross_corr(X, Y, t)
rho_seq.append(rho)
return np.array(rho_seq)
def cross_corr(X, Y, dt):
# TODO: compute the correlation coeff. for time shift dt
rho = -2.
return rho
X =np.load('data/exercise_5/ultrasound.npy', allow_pickle=True)
Y = X.tolist()['Y'][0]
X = X.tolist()['X'][0]
_, ax = plt.subplots(1)
plt.plot(X, '-b')
plt.plot(Y, '-r')
ax.set_title('The two ultrasound signals')
# TODO: compute cross correlation
dt=42
rho_seq=cross_corr_seq(X, Y, dt)
max_idx = np.argmax(rho_seq)
max_rho = rho_seq.max()
# TODO: compute lag value
lag = 42
print('The signal lag is {}.'.format(lag))
_, ax = plt.subplots(1)
ax.plot(np.arange(-dt,dt),rho_seq)
ax.set_title(
'cross-correlation sequence over the lag range [{},{}]'.format(
-dt, dt)
)
_, ax = plt.subplots(1)
plt.plot(X, '-b')
plt.plot(Y[lag:], '-r')
ax.set_title('Two signals with adjusted lag')
print('Signal difference after lag-adjustment:')
print((X[:-lag]-Y[lag:]).sum())
```
#### Correlation-based stereo algorithms
In this exercise, we will deal with the first problem of stereo vision: the *correspondence problem*. For each image point in the left image, we want to find the corresponding point in the right image which is the projection of the same 3D-point.
**Keep in mind**: x denotes the horizontal axis of an image; in a numpy matrix this is the second axis (column x). Accordingly, y is the vertical axis of an image, which is the first axis in a numpy matrix (row y).
Our basic assumption is that corresponding image regions are similar, i.e. correlated. For each image pixel in the left image we are searching for its best match in the right image (or vice versa). Matching only single pixels results in too many false positives, so we choose a neighborhood window around the pixel and correlate it with all candidate blocks in the right image to find its best match (*block matching*). We assume rectified images, i.e. the epipolar lines are aligned, so we only need to search along the horizontal direction.
Possible similarity measures for block matching:
* Sum of Squared Differences (SSD): $$D(x,y,dx,dy)=\sum_{(i,j)\in W_{x,y}}{[I_l(i,j)-I_r(i-dx, j-dy)]^2}$$
* Normalized Cross-correlation (NCC): $$D(x,y,dx,dy)=\frac{\sum_{(i,j) \in W_{x,y}}{I_l(i,j) I_r(i-dx, j-dy)} } {\sqrt{\sum_{(i,j)\in W_{x,y}}{I_{l}^{2} (i,j)} \sum_{(i,j)\in W_{x,y}}{I_{r}^{2}{(i-dx, j-dy)} } } }$$
where $W_{x,y}$ is the square window of a certain size centered around pixel $(x,y)$, $I_l$ and $I_r$ are the left and right intensity images, and $(dx, dy)$ are the *disparities* (shift amounts) along the first (vertical) and second (horizontal) array axes, respectively. Note that $dx$ is therefore zero, since we are only searching for horizontal shifts along the aligned epipolar lines.
The goal is to find for each pixel $(x,y)$ the disparity $(0,dy)$ that either minimizes the error (sum of squared differences) or maximizes the similarity (cross-correlation).
In order to do this, we need to search over a range of disparities up to an allowed maximum disparity.
The output is the so-called `disparity map`: a map where pixel intensities describe the relative depth of points within a scene.
Implement functions `stereo_corr_...(left, right, win_size, max_disp)` which returns the disparity map `disp_map` for the stereo image pair `left` and `right`, given a correlation window size `win_size` and an upper limit on the allowed disparity range `max_disp`. Implement both the SSD and NCC-based block matching and match from left to right (i.e., for each window in the left image, search in the right image, so that the disparity map is with respect to the left image).
Note: When coded as nested `for` loops in python, this can be very slow. Be creative about how you code this. Using **convolution** (e.g. `cv2.blur`) is one possibility.
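For illustration only (assuming `left` and `right` are float grayscale arrays of equal size, as loaded below; the helper name is made up and this is just the inner step, not the full matching function), the windowed SSD for one candidate disparity `d` can be obtained by box-filtering the squared-difference image instead of looping over windows:
```
import cv2
import numpy as np

def ssd_cost_for_disparity(left, right, win_size, d):
    # Align right(x - d) with left(x): shift the right image by d pixels along the x (column) axis.
    shifted = np.zeros_like(right)
    if d > 0:
        shifted[:, d:] = right[:, :-d]
    else:
        shifted[:] = right
    sq_diff = (left - shifted) ** 2
    # cv2.blur gives the windowed mean; multiplying by the window area turns it into the windowed sum.
    return cv2.blur(sq_diff, (win_size, win_size)) * (win_size * win_size)
```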
Use the stereo image pair `left.jpg` and `right.jpg` in the archive to test your algorithm. You may assume that the images are rectified. Visualize the resulting disparity map with the `plt.imshow` command.
Experiment with the following and explain the effects:
* try out different window sizes (e.g. `win_size` 3, 5, 9, 11),
* try out different values of maximum disparity (e.g. 10, 16),
* compare the results obtained with SSD and NCC.
**Task**: Find an example where NCC clearly outperforms SSD.
```
def stereo_corr_ssd(left, right, win_size, max_disp, use_convolution=False):
"""Computes the disparity map (from left to right) for a stereo image pair subject
to a maximum allowed disparity using Sum of Squared Differences as
similarity measure.
It assumes rectified images (search only in the horizontal direction).
In:
left, right: the left and right images in the stereo pair
win_size: correlation window size
max_disp: upper bound on the allowed disparity
Out:
disparity_map: disparity map of the same size as the input
"""
not_same_size = len([1 for x,y in zip(left.shape, right.shape) if x!=y]) > 0
if not_same_size:
raise ValueError('The images should have the same size.')
height, width = left.shape[0], left.shape[1]
# TODO: compute squared diff-based disparity map
if use_convolution:
pass
else:
pass
for d in range(max_disp):
if use_convolution:
pass
else:
pass
disparity_map = 42*np.ones((height, width))
return disparity_map
def stereo_corr_NCC(left, right, win_size, max_disp):
"""Computes the disparity map (from left to right) for a stereo image pair subject
to a maximum allowed disparity using Normalized Cross-Correlation as
similarity measure.
It assumes rectified images (search only in the horizontal direction).
In:
left, right: the left and right images in the stereo pair
win_size: correlation window size
max_disp: upper bound on the allowed disparity
Out:
disparity_map: disparity map of the same size as the input
"""
not_same_size = len([1 for x,y in zip(left.shape, right.shape) if x!=y]) > 0
if not_same_size:
raise ValueError('The images should have the same size.')
height, width = left.shape[0], left.shape[1]
# TODO: compute correlation-based disparity map
disparity_map = 13*np.ones((height, width))
return disparity_map
# load images
left = cv2.cvtColor(cv2.imread('data/exercise_5/left.jpg'), cv2.COLOR_RGB2GRAY).astype('float32')/255.
right = cv2.cvtColor(cv2.imread('data/exercise_5/right.jpg'), cv2.COLOR_RGB2GRAY).astype('float32')/255.
# TODO: define parameters
win_sizes_ = [3]
max_disps_ = [5]
for win_size in win_sizes_:
for max_disp in max_disps_:
disparity_map_ssd = stereo_corr_ssd(left, right, win_size, max_disp)
disparity_map_NCC = stereo_corr_NCC(left, right, win_size, max_disp)
_, ax = plt.subplots(1)
ax.imshow(disparity_map_ssd)
ax.set_title('SSD: win_size: {}, max_disp: {}'.format(win_size, max_disp))
_, ax = plt.subplots(1)
ax.imshow(disparity_map_NCC)
ax.set_title('NCC: win_size: {}, max_disp: {}'.format(win_size, max_disp))
plt.show()
```
# probability distributions in python
source code: [datacamp](https://www.datacamp.com/community/tutorials/probability-distributions-python)
## import libs
```
import matplotlib.pyplot as plt
from IPython.display import Math, Latex
from IPython.core.display import Image
import seaborn as sb
%matplotlib inline
sb.set(color_codes=True)
sb.set(rc={'figure.figsize':(5,5)})
```
## Uniform distribution
continuous variable <br>
f(x) = 1/(b-a) for a <= x <= b <br>
f(x) = 0 for x < a or x > b
```
from scipy.stats import uniform
n = 10000
start = 10
width = 20
data_uniform = uniform.rvs(size=n,loc=start,scale=width,random_state=42)
# plot-object ax
ax = sb.displot(data_uniform,bins=100,kde=True,color='skyblue')
ax.set(xlabel='Uniform distribution', ylabel='Frequency')
```
## normal/Gaussian distribution
continuous<br>
f(x|mu,sigma^2) = 1/sqrt(2*pi*sigma^2) * exp(-(x-mu)^2 / (2*sigma^2))
```
from scipy.stats import norm
data_normal = norm.rvs(size=10000,loc=0,scale=1,random_state=42)
ax = sb.displot(data_normal,bins=100,kde=True,color='skyblue')
ax.set(xlabel='Normal distribution',ylabel='Frequency')
```
## Gamma distribution
continuous<br>
f(x;alpha,beta) = beta^alpha / Gamma(alpha) * x^(alpha-1) * exp(-beta*x)
```
from scipy.stats import gamma
data_gamma = gamma.rvs(a=5,size=10000,random_state=42)
ax = sb.displot(data_gamma,kde=True,bins=100,color='skyblue')
ax.set(xlabel='Gamma distribution',ylabel='Frequency')
```
## exponential distribution
continuous<br>
f(x;lambda) = lambda * exp(-lambda*x) for x >= 0<br>
otherwise 0
```
from scipy.stats import expon
data_exponential = expon.rvs(loc=0,scale=1,size=10000,random_state=42)
ax = sb.displot(data_exponential,kde=True,bins=100,color='skyblue')
ax.set(xlabel='Exponential distribution',ylabel='Frequency')
```
## Poisson distribution
discrete<br>
P(k events in interval) = exp(-lambda) * lambda^k / k!
```
from scipy.stats import poisson
data_poisson = poisson.rvs(mu=3,size=10000,random_state=42)
# mu is lambda
ax = sb.displot(data_poisson,bins=30,kde=False,color='skyblue')
ax.set(xlabel='Poisson distribution',ylabel='Frequency')
```
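To connect the sampled histogram back to the formula above, the probability of seeing exactly k events can be evaluated both by hand and with scipy (a quick sanity check, not from the original tutorial):
```
from math import exp, factorial
from scipy.stats import poisson

lam, k = 3, 2
manual = exp(-lam) * lam**k / factorial(k)
print(manual, poisson.pmf(k, mu=lam))  # both approx. 0.224
```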
## Binomial distribution
discrete<br>
f(k;n,p) = P(X=k) = C(n,k) * p^k * (1-p)^(n-k), where C(n,k) = n!/(k!(n-k)!) is the binomial coefficient
```
from scipy.stats import binom
data_binom = binom.rvs(n=10,p=0.8,size=10000,random_state=42)
ax = sb.displot(data_binom,kde=False,color='skyblue')
ax.set(xlabel='Binomial distribution',ylabel='Frequency')
```
## Bernoulli distribution
discrete<br>
f(k;p) = p^k * (1-p)^(1-k) for k in {0,1}<br>
the n=1 (single trial) case of the Binomial distribution.
```
from scipy.stats import bernoulli
data_bernoulli = bernoulli.rvs(size=10000,p=0.6,random_state=42)
ax = sb.displot(data_bernoulli,bins=10,kde=False,color='skyblue')
ax.set(xlabel='Bernoulli distribution',ylabel='Frequency')
```
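As a small check of the statement above that the Bernoulli distribution is the single-trial (n=1) case of the binomial, the sample means of the two generators both approach p:
```
from scipy.stats import bernoulli, binom

a = bernoulli.rvs(p=0.6, size=100000, random_state=42)
b = binom.rvs(n=1, p=0.6, size=100000, random_state=42)
print(a.mean(), b.mean())  # both close to p = 0.6
```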
```
import numpy as np
import torch
from torch.utils.data import DataLoader
from models.gan_camel import Discriminator, Generator
from tqdm.auto import trange, tqdm
import matplotlib.pyplot as plt
%matplotlib inline
real_images = np.load("../data/camel/full_numpy_bitmap_camel.npy").reshape((-1, 1, 28, 28)).astype(np.float32) / 255
plt.imshow(real_images[0][0], cmap='gray')
plt.show()
generator = Generator().cuda()
discriminator = Discriminator().cuda()
adversarial_loss = torch.nn.BCELoss().cuda()
optimizer_G = torch.optim.RMSprop(generator.parameters(), lr=2e-4)
optimizer_D = torch.optim.RMSprop(discriminator.parameters(), lr=2e-4)
N_EPOCHS = 10
BATCH_SIZE = 64
N_CRITIC_UPDATES = 5  # defined but not used in this basic training loop
image_loader = DataLoader(real_images, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
g_losses = []
real_losses = []
fake_losses = []
for epoch in range(N_EPOCHS):
t = tqdm(image_loader, desc=f"Epoch {epoch}. g_loss {0.0:.2f} d_loss {0.0:.2f}")
for i, real_imgs in enumerate(t):
generator.train()
discriminator.eval()
valid = torch.ones(BATCH_SIZE, 1)
fake = torch.zeros(BATCH_SIZE, 1)
# train generator
optimizer_G.zero_grad()
z = torch.randn(BATCH_SIZE, 100, requires_grad=True)
gen_imgs = generator(z.cuda())
g_loss = adversarial_loss(discriminator(gen_imgs), valid.cuda())
g_losses.append(g_loss.item())
g_loss.backward()
optimizer_G.step()
# train discriminator
discriminator.train()
optimizer_D.zero_grad()
real_loss = adversarial_loss(discriminator(real_imgs.cuda()), valid.cuda())
fake_loss = adversarial_loss(discriminator(gen_imgs.detach()), fake.cuda())
real_losses.append(real_loss.item())
fake_losses.append(fake_loss.item())
d_loss = (real_loss + fake_loss) / 2
d_loss.backward()
optimizer_D.step()
if i % 100 == 0:
t.set_description(f"Epoch {epoch}. g_loss {g_loss.item():.2f} d_loss {d_loss.item():.2f}")
plt.figure()
plt.title(f"{epoch+1} {g_loss.item():.2f} {d_loss.item():.2f}")
plt.axis("off")
generator.eval()
plt.imshow(generator(torch.randn(1, 100).cuda()).cpu().detach().squeeze(), cmap='gray')
plt.show()
torch.save(generator.state_dict(), "models/generator_camel.pt")
torch.save(discriminator.state_dict(), "models/discriminator_camel.pt")
plt.plot(g_losses, label='generator', alpha=0.7, linewidth=0.2)
plt.plot(real_losses, label='real', alpha=0.7, linewidth=0.2)
plt.plot(fake_losses, label='fake', alpha=0.7, linewidth=0.2)
leg = plt.legend()
for i in range(3):
leg.get_lines()[i].set_linewidth(2)
plt.ylim(0, 5)
plt.show()
```
<a href="https://colab.research.google.com/github/davidrkearney/colab-notebooks/blob/main/Text_Generation_LSTM_Dostoevsky.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Text Generation - LSTM
Credit: Code from https://github.com/jeffheaton/t81_558_deep_learning
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import get_file
import numpy as np
import random
import sys
import io
import requests
import re
r = requests.get("https://www.gutenberg.org/cache/epub/600/pg600.txt")
raw_text = r.text
print(raw_text[0:1000])
processed_text = raw_text.lower()
processed_text = re.sub(r'[^\x00-\x7f]',r'', processed_text)
print('corpus length:', len(processed_text))
chars = sorted(list(set(processed_text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(processed_text) - maxlen, step):
sentences.append(processed_text[i: i + maxlen])
next_chars.append(processed_text[i + maxlen])
print('nb sequences:', len(sentences))
sentences
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
x.shape
y.shape
y[0:10]
# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
optimizer = RMSprop(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
def on_epoch_end(epoch, _):
# Function invoked at end of each epoch. Prints generated text.
print("******************************************************")
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(processed_text) - maxlen - 1)
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('----- temperature:', temperature)
generated = ''
sentence = processed_text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Fit the model
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y,
batch_size=128,
epochs=60,
callbacks=[print_callback])
```
# <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109A Introduction to Data Science
## Lab 12: Building and Regularizing your first Neural Network
**Harvard University**<br>
**Fall 2019**<br>
**Instructors:** Pavlos Protopapas, Kevin Rader, Chris Tanner<br>
**Lab Instructors:** Chris Tanner and Eleni Kaxiras. <br>
**Authors:** Eleni Kaxiras, David Sondak, and Pavlos Protopapas.
```
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import pandas as pd
%matplotlib inline
from PIL import Image
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
tf.keras.backend.clear_session() # For easy reset of notebook state.
print(tf.__version__) # You should see a 2.0.0 here!
```
#### Picking up where we left off `tf.keras` with Tensorflow 2.0:
```
tf.keras.models.Sequential
tf.keras.layers.Dense, tf.keras.layers.Activation,
tf.keras.layers.Dropout, tf.keras.layers.Flatten, tf.keras.layers.Reshape
tf.keras.optimizers.SGD
tf.keras.preprocessing.image.ImageDataGenerator
tf.keras.regularizers
tf.keras.datasets.mnist
```
## Learning Goals
In this lab we will continue with the basics of feedforward neural networks, we will create one and explore various ways to optimize and regularize it using `tf.keras`, a deep learning library inside the broader framework called [Tensorflow](https://www.tensorflow.org). By the end of this lab, you should:
- Understand how a simple neural network works and code some of its functionality using `tf.keras`.
- Think of vectors and arrays as tensors. Learn how to do basic image manipulations.
- Implement a simple real world example using a neural network. Find ways to improve its performance.
## Part 1: Motivation
<div class="exercise"><b>In class discussion : why do we care about Neural Nets?</b></div>
**Buzzwords**: Linearity, Interpretability, Performance
## Part 2: Data Preparation
### Tensors
We can think of tensors as multidimensional arrays of real numerical values; their job is to generalize matrices to multiple dimensions.
- **scalar** = just a number = rank 0 tensor ($a$ ∈ $F$,)
<BR><BR>
- **vector** = 1D array = rank 1 tensor ( $x = (\;x_1,...,x_i\;)⊤$ ∈ $F^n$ )
<BR><BR>
- **matrix** = 2D array = rank 2 tensor ( $\textbf{X} = [a_{ij}] ∈ F^{m×n}$ )
<BR><BR>
- **3D array** = rank 3 tensor ( $\mathscr{X} =[t_{i,j,k}]∈F^{m×n×l}$ )
<BR><BR>
- **$\mathscr{N}$D array** = rank $\mathscr{N}$ tensor ( $\mathscr{T} =[t_{i1},...,t_{i\mathscr{N}}]∈F^{n_1×...×n_\mathscr{N}}$ ) <-- **Things start to get complicated here...**
#### Tensor indexing
We can create subarrays by fixing some of the given tensor’s indices. We can create a vector by fixing all but one index. A 2D matrix is created when fixing all but two indices. For example, for a third order tensor the vectors are
<br><BR>
$\mathscr{X}[:,j,k]$ = $\mathscr{X}[j,k]$ (column), <br>
$\mathscr{X}[i,:,k]$ = $\mathscr{X}[i,k]$ (row), and <BR>
$\mathscr{X}[i,j,:]$ = $\mathscr{X}[i,j]$ (tube) <BR>
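To make the indexing rules concrete, here is a tiny `numpy` sketch; the shape (2, 3, 4) is an arbitrary choice for illustration.
```
import numpy as np

# a small rank-3 tensor with shape (m, n, l) = (2, 3, 4), filled with 0..23
X = np.arange(24).reshape(2, 3, 4)

print(X[:, 1, 2])        # fix j=1, k=2 -> "column" vector of length 2
print(X[0, :, 3])        # fix i=0, k=3 -> "row" vector of length 3
print(X[1, 2, :])        # fix i=1, j=2 -> "tube" vector of length 4
print(X[0, :, :].shape)  # fixing only one index leaves a 2D matrix, here (3, 4)
```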
#### Tensor multiplication
We can multiply one matrix with another as long as the sizes are compatible ((n × m) × (m × p) = n × p), and we can also multiply an entire matrix by a constant. Numpy's `numpy.dot` performs matrix multiplication, which is straightforward when we have 2D or 1D arrays. But what about 3D (or higher) arrays? The function will choose the axes according to the matching dimensions, but if we want to choose them ourselves we should use `tensordot`; again, we **do not need tensordot** for this class.
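As a quick sanity check on the shape rule (the array sizes below are arbitrary assumptions):
```
import numpy as np

A = np.random.rand(2, 3)            # (n x m)
B = np.random.rand(3, 5)            # (m x p)
print(np.dot(A, B).shape)           # (2, 5), i.e. (n x p)
print((2 * A).shape)                # multiplying by a scalar keeps the shape

# for 3D (or higher) arrays you can state the contracted axes explicitly
T1 = np.random.rand(2, 3, 4)
T2 = np.random.rand(4, 5)
print(np.tensordot(T1, T2, axes=([2], [0])).shape)  # (2, 3, 5)
```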
### Reese Witherspoon as a Rank 3 Tensor
A common kind of data input to a neural network is images. Images are nice to look at, but remember, the computer only sees a series of numbers arranged in `tensors`. In this part we will look at how images are displayed and altered in Python.
`matplotlib` supports only .png images but uses a library called `Pillow` to handle any image. If you do not have `Pillow` installed you can do this in anaconda:
```
conda install -c anaconda pillow
OR
pip install pillow
```
This image is from the dataset [Labeled Faces in the Wild](http://vis-www.cs.umass.edu/lfw/person/Reese_Witherspoon.html) used for machine learning training. Images are 24-bit RGB images (height, width, channels) with 8 bits for each of R, G, B channel. Explore and print the array.
```
import matplotlib.image as mpimg
# load and show the image
FILE = '../fig/Reese_Witherspoon.jpg'
img = mpimg.imread(FILE);
imgplot = plt.imshow(img);
print(f'The image is a: {type(img)} of shape {img.shape}')
img[3:5, 3:5, :]
```
#### Slicing tensors: slice along each axis
```
# we want to show each color channel
fig, axes = plt.subplots(1, 3, figsize=(10,10))
for i, subplot in zip(range(3), axes):
temp = np.zeros(img.shape, dtype='uint8')
temp[:,:,i] = img[:,:,i]
subplot.imshow(temp)
subplot.set_axis_off()
plt.show()
```
#### Multiplying Images with a scalar
Just for fun, no real use for this lab!
```
temp = img
temp = temp * 2
plt.imshow(temp)
```
For more on image manipulation by `matplotlib` see: [matplotlib-images](https://matplotlib.org/3.1.1/tutorials/introductory/images.html)
## Part 3: Building an Artificial Neural Network
https://www.tensorflow.org/guide/keras
`tf.keras` is TensorFlow's high-level API for building and training deep learning models. It's used for fast prototyping, state-of-the-art research, and production. `Keras` is a library created by François Chollet. After Google released Tensorflow 2.0, the creators of `keras` recommend that "Keras users who use multi-backend Keras with the TensorFlow backend switch to `tf.keras` in TensorFlow 2.0. `tf.keras` is better maintained and has better integration with TensorFlow features".
NOTE: In `Keras` everything starts with a Tensor of N samples as input and ends with a Tensor of N samples as output.
### First you build it ...
Parts of a NN:
* Part 1: the input layer (our dataset)
* Part 2: the internal architecture or hidden layers (the number of layers, the activation functions, the learnable parameters and other hyperparameters)
* Part 3: the output layer (what we want from the network - classification or regression)
### ... and then you train it!
1. Load and pre-process the data
2. Define the layers of the model.
3. Compile the model.
4. Fit the model to the train set (also using a validation set).
5. Evaluate the model on the test set.
6. We learn a lot by studying History! Plot metrics such as accuracy.
7. Now let's use the Network for what it was meant to do: Predict on the test set!
8. Try our model on a sandal from the Kanye West collection!
```
# set the seed for reproducibility of results
seed = 7
np.random.seed(seed)
```
### Fashion MNIST
**Fashion-MNIST** is a dataset of clothing article images (created by [Zalando](https://github.com/zalandoresearch/fashion-mnist)), consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a **28 x 28** grayscale image, associated with a label from **10 classes**. The creators intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits. Each pixel is 8 bits so its value ranges from 0 to 255.
Let's load and look at it!
#### 1. Load and pre-process the data
```
# get the data from keras - how convenient!
fashion_mnist = tf.keras.datasets.fashion_mnist
# load the data already split into train and test! how nice!
(x_train, y_train),(x_test, y_test) = fashion_mnist.load_data()
# normalize the data by dividing with pixel intensity
# (each pixel is 8 bits so its value ranges from 0 to 255)
x_train, x_test = x_train / 255.0, x_test / 255.0
# classes are named 0-9 so define names for plotting clarity
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(x_train[i], cmap=plt.cm.binary)
plt.xlabel(class_names[y_train[i]])
plt.show()
# choose one image to look at
plt.imshow(x_train[3], cmap=plt.cm.binary)
# take a look at the array shapes
x_train.shape, x_test.shape, y_train.shape
```
#### 2. Define the layers of the model.
```
# type your code here along with instructor
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(154, activation = 'relu'),
tf.keras.layers.Dense(64, activation ='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation = 'softmax')
])
```
#### 3. Compile the model
```
# type your code here along with instructor
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
model.compile(optimizer=optimizer,
loss=loss_fn,
metrics=['accuracy'])
# print a summary of your model
model.summary()
# use this cool `tf.keras` method to visualize the layers of your network
tf.keras.utils.plot_model(
model,
#to_file='model.png', # if you want to save the image
show_shapes=True, # True for more details than you need
show_layer_names=True,
rankdir='TB',
expand_nested=False,
dpi=96
)
```
[Everything you wanted to know about a Keras Model and were afraid to ask](https://www.tensorflow.org/api_docs/python/tf/keras/Model)
#### 4. Fit the model to the train set (also using a validation set)
This is the part that takes the longest in terms of time and where having GPUs helps.
-----------------------------------------------------------
**ep·och** <BR>
noun: epoch; plural noun: epochs. A period of time in history or a person's life, typically one marked by notable events or particular characteristics. Examples: "the Victorian epoch", "my Neural Network's epochs". <BR>
-----------------------------------------------------------
```
%%time
# type your code here along with instructor
```
#### Save the model
You can save the model so you do not have to call `.fit` every time you reset the kernel in the notebook. Network training is expensive!
For more details on this see [https://www.tensorflow.org/guide/keras/save_and_serialize](https://www.tensorflow.org/guide/keras/save_and_serialize)
```
# save the model so you do not have to run the code everytime
model.save('fashion_model.h5')
# Recreate the exact same model purely from the file
#model = tf.keras.models.load_model('fashion_model.h5')
```
#### 5. Evaluate the model on the test set.
```
# type your code here along with instructor
history = model.fit(x_train, y_train, validation_split=0.33, epochs=50, verbose=2)
# evaluate on the held-out test set so that test_accuracy is defined below
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
# print results
print(f'Test accuracy={test_accuracy:.4f}')
if test_accuracy>0.8: print(f'Not bad!')
```
#### 6. We learn a lot by studying History! Plot metrics such as accuracy.
You can learn a lot about neural networks by observing how they perform while training. You can issue `callbacks` in `keras`. The network's performance is stored in a `keras` callback aptly named `history`, which can be plotted.
```
print(history.history.keys())
# plot accuracy and loss for the test set
fig, ax = plt.subplots(1,2, figsize=(20,6))
ax[0].plot(history.history['accuracy'])
ax[0].plot(history.history['val_accuracy'])
ax[0].set_title('Model accuracy')
ax[0].set_ylabel('accuracy')
ax[0].set_xlabel('epoch')
ax[0].legend(['train', 'val'], loc='best')
ax[1].plot(history.history['loss'])
ax[1].plot(history.history['val_loss'])
ax[1].set_title('Model loss')
ax[1].set_ylabel('loss')
ax[1].set_xlabel('epoch')
ax[1].legend(['train', 'val'], loc='best')
```
#### 7. Now let's use the Network for what it was meant to do: Predict on the test set!
```
# type your code here along with instructor
predictions = model.predict(x_test)
# print results
print(f'These are the Network\'s predicted probabilities for each class for the first test image: \n{predictions[0]}')
print(f'Our Oracle says this is a class {np.argmax(predictions[0]):.2f}, which is a {class_names[np.argmax(predictions[0])]}')
```
Let's see if our network predicted right! Does this item really look like what was predicted?
```
plt.figure()
plt.imshow(x_test[0], cmap=plt.cm.binary)
plt.xlabel(class_names[y_test[0]])
plt.colorbar()
```
Now let's see how confident our model is by plotting the probability values:
```
# code source: https://www.tensorflow.org/tutorials/keras/classification
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
100*np.max(predictions_array),
class_names[true_label]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], y_test, x_test)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], y_test)
plt.show()
```
#### 8. Try our model on a sandal from the Kanye West collection!
Let's see if our network can generalize beyond the MNIST fashion dataset. Let's give it a trendy shoe and see what it predicts. Here is the image:
<img src="../fig/kanye_shoe.jpg" alt="shoe" width="150" height="150"><BR>
<div class="exercise"><b>In class discussion : What kinds of images can our model predict?</b></div>
**Buzzword**: Generalization
```
# Let's see the tensor shape
shoe = np.array(Image.open('../fig/kanye_28.jpg'))
shoe.shape
# We need to delete the other 2 channels and make the image B&W.
shoe = shoe[:,:,0]
shoe.shape
plt.figure()
plt.imshow(shoe, cmap=plt.cm.binary)
plt.xlabel('a cool shoe')
plt.colorbar()
```
`tf.keras` models are optimized to make predictions on a batch, or collection, of examples at once. Accordingly, even though you're using a single image, you need to add it to a list:
```
# Add the image to a batch where it's the only member.
shoe_batch = (np.expand_dims(shoe,0))
print(shoe_batch.shape)
# write the code to predict here
predictions_single = model.predict(shoe_batch)
print(predictions_single[0])
print(np.argmax(predictions_single[0]), class_names[np.argmax(predictions_single[0])])
```
<div class="exercise"><b>In class discussion : How did our model perform?</b></div>
**Buzzword:** Convolutional Neural Networks!
Let's now try a different boot:
```
boot = np.array(Image.open('../fig/random_boot.png'))
plt.figure()
plt.imshow(boot, cmap=plt.cm.binary)
plt.xlabel('random boot from web')
plt.colorbar()
# make into one channel
boot = boot[:,:,0]
boot.shape
boots = (np.expand_dims(boot,0))
print(boot.shape)
predictions_single = model.predict(boots)
print(predictions_single[0])
print(np.argmax(predictions_single[0]), class_names[np.argmax(predictions_single[0])])
# if it's either a sneaker or a boot we are good
if np.argmax(predictions_single[0]) in [7,9]: print(f'We did better this time!')
```
### Regularization
Let's try adding a regularizer in our model. For more see `tf.keras` [regularizers](https://www.tensorflow.org/api_docs/python/tf/keras/regularizers).<BR>
1. Norm penalties: `kernel_regularizer= tf.keras.regularizers.l2(l=0.1)`
2. Early stopping via `tf.keras.callbacks`. Callbacks provide a way to interact with the model while it's training and enforce some decisions automatically. Callbacks need to be instantiated and are added to the `.fit()` function via the `callbacks` argument.
3. Dropout
```
# callbacks
# watch validation loss and be "patient" for 30 epochs of no improvement
#es = tf.keras.callbacks.EarlyStopping(monitor='val_loss', verbose=1, patience=30)
model_regular = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(154, activation='relu',
kernel_regularizer= tf.keras.regularizers.l2(l=0.1)),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(64, activation='relu',
kernel_regularizer= tf.keras.regularizers.l2(l=0.1)),
tf.keras.layers.Dense(10, activation='softmax')
])
# compile
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
model_regular.compile(optimizer=optimizer,
loss=loss_fn,
metrics=['accuracy'])
# fit
history_regular = model_regular.fit(x_train, y_train, validation_split=0.33, epochs=50,
verbose=2) #, callbacks=[es])
test_loss, test_accuracy = model_regular.evaluate(x_test, y_test, verbose=0)
print(f'Test accuracy for regularized model={test_accuracy}')
# plot accuracy and loss for the test set
fig, ax = plt.subplots(1,2, figsize=(20,6))
ax[0].plot(history_regular.history['accuracy'])
ax[0].plot(history_regular.history['val_accuracy'])
ax[0].set_title('Regularized Model accuracy')
ax[0].set_ylabel('accuracy')
ax[0].set_xlabel('epoch')
ax[0].legend(['train', 'val'], loc='best')
ax[1].plot(history_regular.history['loss'])
ax[1].plot(history_regular.history['val_loss'])
ax[1].set_title('Regularized Model loss')
ax[1].set_ylabel('loss')
ax[1].set_xlabel('epoch')
ax[1].legend(['train', 'val'], loc='best')
```
## Regression with BIWI head pose dataset
This is a more advanced example to show how to create custom datasets and do regression with images. Our task is to find the center of the head in each image. The data comes from the [BIWI head pose dataset](https://data.vision.ee.ethz.ch/cvl/gfanelli/head_pose/head_forest.html#db), thanks to Gabriele Fanelli et al. We have converted the images to jpeg format, so you should download the converted dataset from [this link](https://s3.amazonaws.com/fast-ai-imagelocal/biwi_head_pose.tgz).
```
%matplotlib inline
from fastai2.basics import *
from fastai2.callback.all import *
from fastai2.vision.all import *
from fastai2.notebook.showdoc import *
```
## Getting and converting the data
```
path = untar_data(URLs.BIWI_HEAD_POSE)
cal = np.genfromtxt(path/'01'/'rgb.cal', skip_footer=6); cal
fname = '09/frame_00667_rgb.jpg'
def img2txt_name(f): return path/f'{str(f)[:-7]}pose.txt'
img = PILImage.create(path/fname)
img.show();
ctr = np.genfromtxt(img2txt_name(fname), skip_header=3); ctr
def convert_biwi(coords):
c1 = coords[0] * cal[0][0]/coords[2] + cal[0][2]
c2 = coords[1] * cal[1][1]/coords[2] + cal[1][2]
return tensor([c1,c2])
def get_ctr(f):
ctr = np.genfromtxt(img2txt_name(f), skip_header=3)
return convert_biwi(ctr)
def get_ip(img,pts): return TensorPoint.create(pts, sz=img.size)
get_ctr(fname)
ctr = get_ctr(fname)
ax = img.show(figsize=(6, 6))
get_ip(img, ctr).show(ctx=ax);
```
## Creating a dataset
```
dblock = DataBlock(blocks=(ImageBlock, PointBlock),
get_items=get_image_files,
splitter=FuncSplitter(lambda o: o.parent.name=='13'),
get_y=get_ctr)
dbunch = dblock.databunch(path, path=path, bs=64, batch_tfms=[*aug_transforms(size=(120,160)), Normalize(*imagenet_stats)])
dbunch.show_batch(max_n=9, figsize=(9,6))
```
## Train model
```
#TODO: look in after_item for c
dbunch.c = dbunch.train_dl.after_item.c
learn = cnn_learner(dbunch, resnet34)
learn.lr_find()
lr = 2e-2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1')
learn.load('stage-1');
learn.show_results(max_n=6)
```
## Data augmentation (not ported yet)
```
tfms = get_transforms(max_rotate=20, max_zoom=1.5, max_lighting=0.5, max_warp=0.4, p_affine=1., p_lighting=1.)
data = (PointsItemList.from_folder(path)
.split_by_valid_func(lambda o: o.parent.name=='13')
.label_from_func(get_ctr)
.transform(tfms, tfm_y=True, size=(120,160))
.databunch().normalize(imagenet_stats)
)
def _plot(i,j,ax):
x,y = data.train_ds[0]
x.show(ax, y=y)
plot_multi(_plot, 3, 3, figsize=(8,6))
```
# RippleNet_training_bidirectional
Training of a simple bidirectional recurrent neural network (RNN) implementation in `tensorflow.keras` using LSTM (long short-term memory) units to identify the time of occurrence of sharp wave ripple (SPW-R) events in temporal data.
Author: Espen Hagen (<https://github.com/espenhgn>)
LICENSE: <https://github.com/CINPLA/RippleNet/blob/master/LICENSE>
```
# allow running on Google Colab for training using Google Drive for file access
try:
from google.colab import drive
drive.mount('/content/gdrive')
%cd gdrive/My\ Drive/Colab\ Notebooks/RippleNet
%tensorflow_version 2.x
except:
pass
%matplotlib inline
import os
import numpy as np
import scipy.signal as ss
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
from matplotlib import colors
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.utils import plot_model
import ripplenet.models
import h5py
import pickle
import random
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
print(tf.__version__)
print(tf.test.gpu_device_name())
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
# set random seeds with some additional environment variables to ensure deterministic output
random_seed = 789
os.environ['TF_DETERMINISTIC_OPS'] = '1'
os.environ['PYTHONHASHSEED']=str(random_seed)
random.seed(random_seed)
np.random.seed(random_seed)
tf.random.set_seed(random_seed)
# select dataset (may have generated different sets.)
dataset_index = 0
```
# Load training/validation data
```
# select species for training/validation data (mouse, rat or both)
mouse = True
rat = True
# output destination
output_folder = 'trained_networks'
if not os.path.isdir(output_folder):
os.mkdir(output_folder)
# prefix for trained network files (training loss/MSE, weights, `best' weights)
rnn_prefix = 'ripplenet_bidirectional'
if mouse:
# training and validation files
f_name_train = 'train_{:02}.h5'
f_name_val = 'validation_{:02}.h5'
# training data
f = h5py.File(os.path.join('data',
f_name_train.format(dataset_index)),
'r')
X_train = np.expand_dims(f['X0'][:], -1)
Y_train = f['Y'][:]
f.close()
# validation data
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)),
'r')
X_val = np.expand_dims(f['X0'][:], -1)
Y_val = f['Y'][:]
f.close()
# load some data for plotting
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)), 'r')
X0 = f['X0'][:]
X1 = f['X1'][:]
S = f['S'][:]
Y = f['Y'][:]
S_freqs = f['S_freqs'][:]
f.close()
# Add rat training/validation data to sets
if rat and mouse:
# rat
f_name_train = 'train_tingley_{:02}.h5'
f_name_val = 'validation_tingley_{:02}.h5'
# training data
f = h5py.File(os.path.join('data',
f_name_train.format(dataset_index)),
'r')
X_train = np.concatenate((X_train, np.expand_dims(f['X0'][:], -1)))
Y_train = np.concatenate((Y_train, f['Y'][:]))
f.close()
# validation data
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)),
'r')
X_val = np.concatenate((X_val, np.expand_dims(f['X0'][:], -1)))
Y_val = np.concatenate((Y_val, f['Y'][:]))
f.close()
# load some data for plotting
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)), 'r')
X0 = np.concatenate((X0, f['X0'][:]))
X1 = np.concatenate((X1, f['X1'][:]))
S = np.concatenate((S, f['S'][:]))
Y = np.concatenate((Y, f['Y'][:]))
f.close()
if rat and not mouse:
# rat
f_name_train = 'train_tingley_{:02}.h5'
f_name_val = 'validation_tingley_{:02}.h5'
# training data
f = h5py.File(os.path.join('data',
f_name_train.format(dataset_index)),
'r')
X_train = np.expand_dims(f['X0'][:], -1)
Y_train = f['Y'][:]
f.close()
# validation data
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)),
'r')
X_val = np.expand_dims(f['X0'][:], -1)
Y_val = f['Y'][:]
f.close()
# load some data for plotting
f = h5py.File(os.path.join('data',
f_name_val.format(dataset_index)), 'r')
X0 = f['X0'][:]
X1 = f['X1'][:]
S = f['S'][:]
Y = f['Y'][:]
S_freqs = f['S_freqs'][:]
f.close()
# needed parameters
Fs = 1250 # Hz, sampling freq
time = np.arange(X0.shape[1]) / Fs
# center raw data
X0 = (X0.T - X0.mean(axis=-1)).T
# total number of samples
n_samples = X0.shape[0]
# plot all labels and raw data matrices
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(12, 12))
axes[0].pcolormesh(time, np.arange(n_samples), Y[:, :, 0])
axes[0].set_ylabel('#')
axes[0].set_title('labels (y)')
axes[1].pcolormesh(time, np.arange(n_samples), X0, vmin=-X0.std()*3, vmax=X0.std()*3)
axes[1].set_ylabel('#')
axes[1].set_xlabel('t (s)')
axes[1].set_title('raw data (X)')
for ax in axes:
ax.axis(ax.axis('tight'))
# plot wavelet spectrograms vs. labels and raw data for some samples
for i in range(5):
gs = GridSpec(2, 1)
fig = plt.figure(figsize=(12, 6))
ax0 = fig.add_subplot(gs[0, 0])
ax0.plot(time, X0[i, ], label='$X(t)$')
ax0.plot(time, X1[i, ], label=r'$\phi_\mathrm{bp}(t)$')
ax0.plot(time, Y[i, :, 0], label='label ($y$)' )
ax0.legend(ncol=2)
ax0.axis(ax0.axis('tight'))
ax0.set_title('label, raw data and spectrograms')
plt.setp(ax0.get_xticklabels(), visible=False)
ax1 = fig.add_subplot(gs[1:, 0], sharex=ax0)
vmin, vmax = np.exp(np.percentile(np.log(S), [1, 99]))
im = ax1.pcolormesh(time, S_freqs, S[i, ].T, norm=colors.LogNorm(vmin=vmin, vmax=vmax),
cmap='inferno')
ax1.axis(ax1.axis('tight'))
ax1.set_ylabel('$f$ (Hz)')
ax1.set_xlabel('$t$ (s)')
```
# Set up recurrent neural network
```
model = ripplenet.models.get_bidirectional_LSTM_model(input_shape=(None, X_train.shape[2]),
layer_sizes=[20, 10, 6, 6],
seed=random_seed+1)
model.summary()
# plot_model(model, show_shapes=True, expand_nested=True)
# model checkpoints when validation mse improves
filepath = os.path.join(output_folder, '{}_best_random_seed{}.h5'.format(rnn_prefix, random_seed))
checkpoint_best = keras.callbacks.ModelCheckpoint(filepath, monitor='val_mse',
verbose=1, save_best_only=True,
mode='min')
callback_hist = keras.callbacks.CSVLogger(os.path.join(output_folder, '{}_history_random_seed{}.csv'.format(rnn_prefix, random_seed)))
callbacks_list = [checkpoint_best, callback_hist]
# train model
history = model.fit(X_train, Y_train,
batch_size=20,
epochs=50,
callbacks=callbacks_list,
validation_data=(X_val, Y_val))
# save history to a pickle so we can load it later
with open(os.path.join(output_folder, '{}_history_random_seed{}.pkl'.format(rnn_prefix, random_seed)), 'wb') as f:
pickle.dump(history.history, f)
plt.figure(figsize=(12, 12))
plt.semilogy(history.history['loss'], '-o', label='loss')
plt.semilogy(history.history['val_loss'], '-o', label='val_loss')
plt.semilogy(history.history['mse'], '-o', label='mse')
plt.semilogy(history.history['val_mse'], '-o', label='val_mse')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('MSE')
plt.title('training/validation MSE')
# Save the trained model
model.save(os.path.join(output_folder, '{}_random_seed{}.h5'.format(rnn_prefix, random_seed)))
```
# Reinforcement Learning (RL)
## fishing without fishing!
Imagine you want to go river fishing and you are given a fishing rod. You don't have any fish or any fishing skills, and you can't afford to learn fishing from an expert right now. So all you can do is cast your fishing rod and wait for the results. After some waiting you might think of moving somewhere along the river where the water is deeper and there are more fish, so you have a bigger chance of catching one. By catching the first fish you find out that going to such spots isn't a bad idea. So by waiting a long time in **bad** fishing places (a small negative reward at each time step), you learn to stop fishing in those areas, and by catching fish in **good** places (usually a terminal state with a high reward), you try to find out what is good about the place (in our example, the depth of the river).
## RL vs MDP
In the previous doc we saw that in an MDP we want to find the optimal actions in a world such that, by taking those actions, we reach the maximum sum of rewards over time. The main difference between MDPs and RL is that in RL we do not know R(s,a,s') or T(s,a,s'), and we must actually **do the action** to get the reward.
We can think of solving an MDP as an offline method because we already know what T and R are, but in RL we have to actually **do** some actions, so it is an online method.
## Types of RL
### Model-Based Learning
This type of learning assumes an approximate model of the problem and solves it in two iterative steps.
1) Get some samples out of the environment and count the outcome states for each starting state s and action a. Using maximum likelihood (normalized counts) we can estimate T(s,a,s'). We also record R(s,a,s') for each sample.
2) Solve the learned MDP. In this step we treat the estimated T(s,a,s') as fixed and use the usual approach for solving MDPs, e.g. value iteration.
We perform several such steps for the example above. At each step we take some samples out of the environment, apply them to our model, which here is just the empirical transition frequencies between states s and s', and obtain the best-fitting T(s,a,s') for that step.
In the example above, for each state s and action a we assign T(s,a,s') = count(S'=s' | S=s, A=a) / count(S=s, A=a).
For example, for s = C after episode 4 we have:
T(C, east, D) = 3/4.
Note that each sample also gives us the resulting state s' and the corresponding reward R(s,a,s'). A small sketch of this counting procedure is given below.
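A minimal Python sketch of step 1; the sample list below is a stand-in consistent with the 3/4 example above, not the full episode data:
```
from collections import defaultdict

# each sample is a tuple (s, a, s_next, reward) observed while acting
samples = [('C', 'east', 'D', -1), ('C', 'east', 'D', -1),
           ('C', 'east', 'D', -1), ('C', 'east', 'A', -1)]

counts = defaultdict(int)          # counts[(s, a, s_next)]
totals = defaultdict(int)          # totals[(s, a)]
reward_sum = defaultdict(float)    # running sum of rewards per (s, a, s_next)

for s, a, s_next, r in samples:
    counts[(s, a, s_next)] += 1
    totals[(s, a)] += 1
    reward_sum[(s, a, s_next)] += r

# maximum-likelihood estimates of T and R
T = {k: counts[k] / totals[(k[0], k[1])] for k in counts}
R = {k: reward_sum[k] / counts[k] for k in counts}

print(T[('C', 'east', 'D')])   # 0.75, i.e. 3/4 as in the text
```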
### Model-Free Learning
In this approach we never estimate T or R explicitly; we rely directly on the sampled outcomes themselves.
For example, suppose we want to find the average age of a group of people:
* In model-based learning we first estimate the probability of each unique age from the samples and then compute the expected age under that estimated model.
* In model-free learning we simply average the sampled ages directly, without building a model of their distribution.
### Passive RL
In this setting we are given a fixed policy $\pi$ and the goal is to evaluate it, i.e. to learn the state values $V^\pi(s)$, using one of two approaches:
* Estimate $V^\pi(s)$ by averaging the returns observed after each visit to state s while following the fixed policy (direct evaluation).
* Use a depth-1 expectimax-style (Bellman) update to iteratively improve the current value estimates under the fixed policy (policy evaluation).
#### Direct Evaluation
We run several episodes from different starting points and follow the agent along its path to the terminal state (in this case D). Then we estimate the $V^\pi$ function.
Assume we want to calculate $V^\pi(B)$. Tracking all the episodes that start from B and end in the terminal state, we compute the value as the average of the returns observed in those episodes (in this case episodes 1 and 2).
$V^\pi(B) = ( (-1-1+10) + (-1-1+10) ) / 2 = 8$
similarly:
$V^\pi(C) = ( 3 \cdot (-1+10) + (-1-10) ) / 4 = 4 $
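A minimal Python sketch of direct evaluation, assuming undiscounted returns ($\gamma = 1$) and two stand-in episodes like episodes 1 and 2 above:
```
from collections import defaultdict

gamma = 1.0  # no discounting, as in the example above

# each episode is a list of (state, reward received on leaving that state)
episodes = [
    [('B', -1), ('C', -1), ('D', 10)],
    [('B', -1), ('C', -1), ('D', 10)],
]

returns = defaultdict(list)
for episode in episodes:
    G = 0.0
    # walk backwards so G accumulates the return from each state onward
    for state, reward in reversed(episode):
        G = reward + gamma * G
        returns[state].append(G)

V = {s: sum(g) / len(g) for s, g in returns.items()}
print(V['B'])   # 8.0, matching the hand computation above
```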
The problems with direct evaluation are:
* Each state must be learnend separately.
* It wastes information about states connections.
* therefore, it takes a long time to learn.
In the example above, due to the limited number of sampled episodes, we get different values for $V^\pi(B)$ and $V^\pi(C)$ even though the two states are symmetric. This means we would have to run a lot of episodes to reach the desired output.
#### Policy Evaluation
In this method we use a simple Bellman-equation update under a fixed policy to calculate the value function; in other words, we are **evaluating** the current fixed policy.
$V^\pi_0 = 0$
$V^\pi_{k+1}(s) = \sum_{s'} T(s,\pi(s),s')[R(s,\pi(s),s') + \gamma V^\pi_k(s')]$
This approach takes the connections between states into account. The question remains: how do we implement this Bellman update when we only have samples from the environment, not T and R?
#### sample-based policy evaluation
At step k+1 we approximate:
$V^\pi_{k+1}(s) \approx \frac{1}{n} \sum_i sample_i$
where each $sample_i = R(s,\pi(s),s'_i) + \gamma V^\pi_k(s'_i)$ comes from a transition taken from state s under the fixed policy.
#### temporal difference
Big idea: learn from each experience as it comes, weighting the update by a learning-rate coefficient $\alpha$.
But why not learn from each sample equally?
If we want to weight all samples equally, we can keep a visit counter for the current state s and use $\alpha=\frac{1}{counter+1}$ at each update, which turns the estimate into an exact running average.
sample of V(s): $sample = R(s,\pi(s),s') + \gamma V^\pi(s')$
Update to V(s): $V^\pi(s) \leftarrow V^\pi(s) + \alpha(sample-V^\pi(s))$
If we want to emphasize the importance of recent samples we can use a fixed $\alpha$ between 0 and 1.
$V^\pi(B) = (1-\frac{1}{2})*0 +\frac{1}{2} (-2+1*0) = -1$
$V^\pi(C) = (1-\frac{1}{2})*0 +\frac{1}{2} (-2+1*8) = 3$
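A minimal Python sketch of this running TD update, using $\alpha = 1/2$, $\gamma = 1$ and the two updates computed just above ($V(D) = 8$ is assumed to have been learned already):
```
alpha = 0.5
gamma = 1.0

V = {'B': 0.0, 'C': 0.0, 'D': 8.0}  # D already has a learned value

def td_update(V, s, r, s_next):
    """One temporal-difference update toward the observed sample."""
    sample = r + gamma * V[s_next]
    V[s] = (1 - alpha) * V[s] + alpha * sample

td_update(V, 'B', -2, 'C')   # observe (B, east, C, -2)
td_update(V, 'C', -2, 'D')   # observe (C, east, D, -2)
print(V['B'], V['C'])        # -1.0 and 3.0, as in the hand computation
```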
**Problems with temporal difference**
Although this method mimics the Bellman-equation updates using only samples, if we want to turn the learned values into a new policy we are sunk, because extracting a policy from V requires knowing T and R.
We solve this problem by learning Q-values instead of state value functions.
$\pi(s) = \underset{a}{argmax} Q(s,a)$
$Q(s,a) = \underset{s'}\sum T(s,a,s')[R(s,a,s')+\gamma V(s')]$
#### Q-Learning
Using Q-values gives us an RL method named Q-learning, which is sample-based Q-value iteration:
* $sample = R(s,a,s') + \gamma \max_{a'} Q(s',a')$
* $Q(s,a) \leftarrow (1-\alpha)Q(s,a) + \alpha \cdot sample$
Q-learning eventually converges to the optimal policy even if you are acting sub-optimally; this is called **off-policy learning**.
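A minimal sketch of the tabular Q-learning update in Python; the action set and the $\alpha$, $\gamma$ values are assumptions for illustration:
```
from collections import defaultdict

alpha, gamma = 0.5, 0.9
actions = ['north', 'south', 'east', 'west']
Q = defaultdict(float)           # Q[(s, a)], defaults to 0

def q_update(s, a, r, s_next):
    """One sample-based Q-value update for an observed (s, a, r, s') tuple."""
    sample = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample

def greedy_policy(s):
    """pi(s) = argmax_a Q(s, a)"""
    return max(actions, key=lambda a: Q[(s, a)])
```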
### Active RL
In this setting we still have to **act** to find the optimal value functions, but the policy is not fixed and may change during training.
Two new terms appear in active RL: exploration and exploitation. Exploration refers to trying actions that are rarely taken because they might lead to a bigger reward. Exploitation refers to sticking with the currently best-known action in each state once we have tried almost everything.
The simplest way of choosing between the two is $\epsilon$-greedy action selection: with probability $\epsilon$ we take a random action (exploration) and with probability $(1-\epsilon)$ we take the current policy's action (exploitation).
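A minimal sketch of $\epsilon$-greedy action selection; the action set and Q table are assumptions, standing in for whatever the agent has learned so far:
```
import random
from collections import defaultdict

epsilon = 0.1
actions = ['north', 'south', 'east', 'west']
Q = defaultdict(float)  # current Q-value estimates

def epsilon_greedy(s):
    """With probability epsilon explore; otherwise exploit the current Q values."""
    if random.random() < epsilon:
        return random.choice(actions)                 # exploration
    return max(actions, key=lambda a: Q[(s, a)])      # exploitation
```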
There is another, more principled way of doing this: count how many times each action has been tried in each state. If we have not tried an action much, we should try it more often; if we have tried it a lot and it does not lead to good outcomes, we should stop choosing it.
With this idea, our Q-value update changes as below:
$Q(s,a) \leftarrow (1-\alpha)Q(s,a) + \alpha\left[R(s,a,s') + \gamma \max_{a'} f(Q(s',a'),N(s',a'))\right]$
In the above equation, f takes two inputs, v and n, and returns f(v,n) = v + k/n.
Here k is a fixed constant, v is the (optimistic) utility estimate, in this case the Q-value, and n is the number of times action a' has been tried in state s'. When n is small the bonus k/n is large, so rarely tried actions are chosen more often, but as n grows the choice relies mostly on the utility itself.
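A minimal sketch of this count-based variant; k, $\alpha$, $\gamma$ and the action set are assumptions, and the bonus uses n + 1 in the denominator purely to avoid dividing by zero for never-tried pairs:
```
from collections import defaultdict

alpha, gamma, k = 0.5, 0.9, 1.0
actions = ['north', 'south', 'east', 'west']
Q = defaultdict(float)   # Q[(s, a)]
N = defaultdict(int)     # visit counts N[(s, a)]

def f(v, n):
    """Optimistic utility f(v, n) = v + k/n (n + 1 here so unvisited pairs stay finite)."""
    return v + k / (n + 1)

def q_update_with_exploration(s, a, r, s_next):
    N[(s, a)] += 1
    sample = r + gamma * max(f(Q[(s_next, a2)], N[(s_next, a2)]) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
```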
### Regret
Almost all of the RL algorithms defined above reach the optimal policy eventually, but the ones that reach it later, making more mistakes along the way, accumulate a greater **regret**: the total gap between the rewards they collected and the rewards an optimal agent would have collected.
### Generalizing Across States
Real-life RL environments have very large state spaces, so we cannot explore all of them; however, there are often symmetric or similar states that lead to the same outcome. In **generalizing** we try to find a way to avoid recomputing values for those states. One way of doing this is to use a linear model for the value function instead of the value function itself: we learn features of each state and take a weighted sum of these features as the final value.
$V(s) = w_1f_1(s) + w_2f_2(s) + ... + w_nf_n(s)$
$Q(s,a) = w_1f_1(s,a) + w_2f_2(s,a) + ... + w_nf_n(s,a)$
There is a tradeoff between doing less computation and mis-valuing states that look similar but actually have different values.
In this method, instead of updating Q-values we update the weights $w_i$, as in the sketch below.
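A minimal sketch of such a linear (feature-based) Q-function; the feature functions here are hypothetical placeholders:
```
import numpy as np

def features(s, a):
    """Hypothetical feature vector f(s, a); in practice these are designed for the task."""
    return np.array([1.0, s, s * s, float(a == "east")])

w = np.zeros(4)                             # one weight per feature

def q_value(s, a):
    return float(np.dot(w, features(s, a)))  # Q(s, a) = sum_k w_k * f_k(s, a)
```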
### Minimizing Error
If we assume a sum-of-squared-errors (SSE) objective as defined below:
$\text{total error} = \sum_i (y_i - \hat{y_i})^2 = \sum_i \Big( y_i - \sum_k w_k f_k(x_i) \Big)^2$
Imagine we have only one point $x$, with features $f(x)$, target value $y$, and weights $w$:
$error(w) = \frac{1}{2} \Big( y - \sum_k w_k f_k(x) \Big)^2$
$\frac{\partial\, error(w)}{\partial w_m} = -\Big( y - \sum_k w_k f_k(x) \Big) f_m(x)$
$w_m \leftarrow w_m + \alpha \Big( y - \sum_k w_k f_k(x) \Big) f_m(x)$
Then the approximate Q-update becomes:
$w_m \leftarrow w_m + \alpha \Big[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \Big] f_m(s,a)$
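A sketch of this approximate Q-learning weight update, reusing the hypothetical `features`, `w`, and `q_value` from the previous sketch:
```
alpha, gamma = 0.1, 0.9
actions = ["east", "west"]

def approx_q_update(s, a, r, s_next):
    """Return the updated weight vector after observing one transition (s, a, r, s')."""
    target = r + gamma * max(q_value(s_next, a2) for a2 in actions)
    difference = target - q_value(s, a)
    return w + alpha * difference * features(s, a)   # w_m <- w_m + alpha * difference * f_m(s, a)

w = approx_q_update(0.3, "east", -1.0, 0.5)
```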
```
from bs4 import BeautifulSoup
# Call read() directly on the file object returned by open(),
# and assign the returned string to the html variable
html = open('webtoon_list.html', 'rt').read()
soup = BeautifulSoup(html)
from urllib import parse
for a in soup.select('a.title'):
href = a['href']
print(href)
for img in soup.select('div.thumb img[title]'):
print(img['src'])
print(img['title'])
html = ''
with open ('webtoon_list.html', 'rt') as f:
html = f.read()
import re
sample = '''
<a href="asdf" class="title" title="유미의 세포들">유미의세포들</a>
<a href="asdf" class="title" title="덴마"</a>
'''
result_list = re.findall(r'<a.*?class="title".*?>(?P<name>[\w\s]+)</a>', html)
m_list = re.finditer(r'<a.*?class="title".*?>(?P<name>[\w\s]+)</a>', html)
p = re.compile(r'''
<a.*?
href="(?P<href>.*?)".*?
class="title".*?>
(?P<name>[\w\s]+)
</a>
''', re.X)
print("p=", p)
m_list = re.finditer(p, html)
print("2ndp=", p)
print("list: ", m_list)
for m in m_list:
print(m.group('href'))
print(m.group('name'))
p = re.compile(r'''
<a.*?
href="(?P<href>.*?)".*?
class="title".*?>
(?P<name>[\w\s]+)
</a>
''', re.VERBOSE)
m_list = re.finditer(p, html)
for m in m_list:
print(m.group('href'))
title_match = re.search(r'\?titleId=(\d+)', m.group('href'))
title_id = title_match.group(1)
print(title_id)
print(m.group('name'))
p = re.compile(r'''
<a.*?
class="title".*?>
(?P<name>[\w\s]+)
</a>
''', re.X)
webtoon_list = re.findall(p, html)
print(len(webtoon_list))
webtoon_list = list(set(webtoon_list))
print(len(webtoon_list))
p = re.compile(r'''
<a.*?
class="title".*?>
(?P<name>[\w\s]+)
</a>
''', re.X)
webtoon_list = re.findall(p, html)
webtoon_dict = {}
for title in webtoon_list:
cur_webtoon_count = webtoon_dict.get(title)
if not cur_webtoon_count:
webtoon_dict[title] = 1
else:
webtoon_dict[title] += 1
from collections import Counter
webtoon_dict = Counter(webtoon_list)
print(webtoon_dict)
webtoon_count_dict = {}
for title, count in webtoon_dict.items():
webtoon_count_dict.setdefault(count, [])
webtoon_count_dict[count].append(title)
from collections import OrderedDict
s = OrderedDict(sorted(webtoon_count_dict.items(), key=lambda item: item[0], reverse=True))
print(webtoon_count_dict)
import operator
from collections import OrderedDict
s = OrderedDict(sorted(webtoon_dict.items(), key=operator.itemgetter(1), reverse=True))
print(webtoon_count_dict)
from collections import OrderedDict
def webtoon_sort(item):
return (-item[1], item[0])
s = OrderedDict(sorted(webtoon_dict.items(), key=webtoon_sort))
print(webtoon_count_dict)
from bs4 import BeautifulSoup
html = open('webtoon_list.html', 'rt').read()
soup = BeautifulSoup(html)
from urllib import parse
for a in soup.select('a.title'):
href = a['href']
print(href)
for img in soup.select('div.thumb img[title]'):
print(img['src'])
print(img['title'])
for div_thumb in soup.select('div.thumb'):
src = div_thumb.select_one('img[title]')['src']
title = div_thumb.parent.select_one('a.title').get_text(strip=True)
print(src)
print(title)
webtoon_info_dict = {
'유미의 세포들': {
        'link': '/webtoon/list.nhn?titleId=651673&weekday=wed',
'thumbnail': 'https://shared-comic.pstatic.net/thumb/webtoon/651673/thumbnail/thumbnail_IMAG10_659b9446-0940-494a-bb5f-5893290a84d0.jpg',
},
'덴마': {
'link': '...',
'thumbnail': '...'
},
}
def webtoon_info(title):
print(webtoon_info)
soup = BeautifulSoup(open('webtoon_list.html', 'rt').read())
a = soup.select_one('a.title[title="{}"]'.format(title))
href = a['href']
li = a.parent
img = li.select_one('img')
thumbnail = img['src']
return {
'link': href,
'thumbnail': thumbnail,
}
webtoon_info('유')
def webtoon_info(title):
soup = BeautifulSoup(open('webtoon_list.html', 'rt').read())
a_list = soup.select('a.title[title*="{}"]'.format(title))
results = []
for a in a_list:
href = a['href']
thumbnail = a.parent.select_one('img')['src']
title = a.get_text(strip=True)
cur_info = {
'title': title,
'link': href,
'thumbnail': thumbnail,
}
results.append(cur_info)
return results
webtoon_info('유')
webtoon_info_dict = {}
soup = BeautifulSoup(open('webtoon_list.html', 'rt').read())
for div_thumb in soup.select('div.thumb'):
a = div_thumb.parent.select_one('a.title')
link = a['href']
title = a.get_text(strip=True)
src = div_thumb.select_one('img[title]')['src']
webtoon_info_dict[title] = {
'link': link,
'thumbnail': src,
}
def webtoon_info(title):
result_dict = {}
for key, value in webtoon_info_dict.items():
if title in key:
result_dict[key] = value
return result_dict
webtoon_info('유미')
```
# Gradient-boosting decision tree (GBDT)
In this notebook, we will present the gradient boosting decision tree
algorithm and contrast it with AdaBoost.
Gradient-boosting differs from AdaBoost due to the following reason: instead
of assigning weights to specific samples, GBDT will fit a decision tree on
the residual errors (hence the name "gradient") of the previous tree.
Therefore, each new tree in the ensemble predicts the error made by the
previous learner instead of predicting the target directly.
In this section, we will provide some intuition about the way learners are
combined to give the final prediction. In this regard, let's go back to our
regression problem which is more intuitive for demonstrating the underlying
machinery.
```
import pandas as pd
import numpy as np
# Create a random number generator that will be used to set the randomness
rng = np.random.RandomState(0)
def generate_data(n_samples=50):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_max, x_min = 1.4, -1.4
len_x = x_max - x_min
x = rng.rand(n_samples) * len_x - len_x / 2
noise = rng.randn(n_samples) * 0.3
y = x ** 3 - 0.5 * x ** 2 + noise
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(np.linspace(x_max, x_min, num=300),
columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
data_train, data_test, target_train = generate_data()
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
```
As we previously discussed, boosting will be based on assembling a sequence
of learners. We will start by creating a decision tree regressor. We will set
the depth of the tree so that the resulting learner will underfit the data.
```
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
target_train_predicted = tree.predict(data_train)
target_test_predicted = tree.predict(data_test)
# plot the data
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
# plot the predictions
line_predictions = plt.plot(data_test, target_test_predicted, "--")
# plot the residuals
for value, true, predicted in zip(data_train["Feature"],
target_train,
target_train_predicted):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
plt.legend([line_predictions[0], lines_residuals[0]],
["Fitted tree", "Residuals"])
_ = plt.title("Prediction function together \nwith errors on the training set")
```
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p class="last">In the cell above, we manually edited the legend to get only a single label
for all the residual lines.</p>
</div>
Since the tree underfits the data, its accuracy is far from perfect on the
training data. We can observe this in the figure by looking at the difference
between the predictions and the ground-truth data. We represent these errors,
called "Residuals", by unbroken red lines.
Indeed, our initial tree was not expressive enough to handle the complexity
of the data, as shown by the residuals. In a gradient-boosting algorithm, the
idea is to create a second tree which, given the same data `data`, will try
to predict the residuals instead of the vector `target`. We would therefore
have a tree that is able to predict the errors made by the initial tree.
Let's train such a tree.
```
residuals = target_train - target_train_predicted
tree_residuals = DecisionTreeRegressor(max_depth=5, random_state=0)
tree_residuals.fit(data_train, residuals)
target_train_predicted_residuals = tree_residuals.predict(data_train)
target_test_predicted_residuals = tree_residuals.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=residuals, color="black", alpha=0.5)
line_predictions = plt.plot(data_test, target_test_predicted_residuals, "--")
# plot the residuals of the predicted residuals
for value, true, predicted in zip(data_train["Feature"],
residuals,
target_train_predicted_residuals):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
plt.legend([line_predictions[0], lines_residuals[0]],
["Fitted tree", "Residuals"])
_ = plt.title("Prediction of the previous residuals")
```
We see that this new tree only manages to fit some of the residuals. We will
focus on a specific sample from the training set (i.e. we know that this
sample will be predicted well using two successive trees). We will use this
sample to explain how the predictions of both trees are combined. Let's first
select this sample in `data_train`.
```
data_max = data_train.iloc[-2, 0]
target_true = target_train.iloc[-2]
target_true_residual = residuals.iloc[-2]
```
Let's plot the previous information and highlight our sample of interest.
Let's start by plotting the original data and the prediction of the first
decision tree.
```
# Plot the previous information:
# * the dataset
# * the predictions
# * the residuals
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, target_test_predicted, "--")
for value, true, predicted in zip(data_train["Feature"],
target_train,
target_train_predicted):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
# Highlight the sample of interest
plt.scatter(data_max, target_true, label="Sample of interest",
color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend()
_ = plt.title("Tree predictions")
```
Now, let's plot the residuals information. We will plot the residuals
computed from the first decision tree and show the residual predictions.
```
# Plot the previous information:
# * the residuals committed by the first tree
# * the residual predictions
# * the residuals of the residual predictions
sns.scatterplot(x=data_train["Feature"], y=residuals,
color="black", alpha=0.5)
plt.plot(data_test, target_test_predicted_residuals, "--")
for value, true, predicted in zip(data_train["Feature"],
residuals,
target_train_predicted_residuals):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
# Highlight the sample of interest
plt.scatter(data_max, target_true_residual, label="Sample of interest",
color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend()
_ = plt.title("Prediction of the residuals")
```
For our sample of interest, our initial tree is making an error (small
residual). When fitting the second tree, the residual in this case is
perfectly fitted and predicted. We will quantitatively check this prediction
using the fitted tree. First, let's check the prediction of the initial tree
and compare it with the true value.
```
print(f"True value to predict for f(x={data_max:.3f}) = {target_true:.3f}")
y_pred_first_tree = tree.predict([[data_max]])[0]
print(f"Prediction of the first decision tree for x={data_max:.3f}: "
f"y={y_pred_first_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_tree:.3f}")
```
As we visually observed, we have a small error. Now, we can use the second
tree to try to predict this residual.
```
print(f"Prediction of the residual for x={data_max:.3f}: "
f"{tree_residuals.predict([[data_max]])[0]:.3f}")
```
We see that our second tree is capable of predicting the exact residual
(error) of our first tree. Therefore, we can predict the value of `x` by
summing the predictions of all the trees in the ensemble.
```
y_pred_first_and_second_tree = (
y_pred_first_tree + tree_residuals.predict([[data_max]])[0]
)
print(f"Prediction of the first and second decision trees combined for "
f"x={data_max:.3f}: y={y_pred_first_and_second_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_and_second_tree:.3f}")
```
We chose a sample for which only two trees were enough to make the perfect
prediction. However, we saw in the previous plot that two trees were not
enough to correct the residuals of all samples. Therefore, one needs to
add several trees to the ensemble to successfully correct the error.
(i.e. the second tree corrects the first tree's error, while the third tree
corrects the second tree's error and so on.)
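To make this iterative correction concrete, here is a minimal sketch of the residual-fitting loop (it reuses `data_train`/`target_train` from above; the number of trees is arbitrary and no shrinkage/learning rate is applied, so it is not scikit-learn's actual implementation):
```
n_trees = 5
trees = []
current_residuals = target_train.copy()

for _ in range(n_trees):
    # Each new tree is fitted on what the previous trees still get wrong.
    new_tree = DecisionTreeRegressor(max_depth=3, random_state=0)
    new_tree.fit(data_train, current_residuals)
    trees.append(new_tree)
    current_residuals = current_residuals - new_tree.predict(data_train)

# The ensemble prediction is the sum of the individual trees' predictions.
ensemble_train_predictions = sum(t.predict(data_train) for t in trees)
```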
We will compare the generalization performance of random-forest and gradient
boosting on the California housing dataset.
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import cross_validate
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
from sklearn.ensemble import GradientBoostingRegressor
gradient_boosting = GradientBoostingRegressor(n_estimators=200)
cv_results_gbdt = cross_validate(
gradient_boosting, data, target, scoring="neg_mean_absolute_error",
n_jobs=2,
)
print("Gradient Boosting Decision Tree")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_gbdt['test_score'].mean():.3f} +/- "
f"{cv_results_gbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_gbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
from sklearn.ensemble import RandomForestRegressor
random_forest = RandomForestRegressor(n_estimators=200, n_jobs=2)
cv_results_rf = cross_validate(
random_forest, data, target, scoring="neg_mean_absolute_error",
n_jobs=2,
)
print("Random Forest")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_rf['test_score'].mean():.3f} +/- "
f"{cv_results_rf['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_rf['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_rf['score_time'].mean():.3f} seconds")
```
In terms of computational performance, the forest can be parallelized and will
benefit from using multiple cores of the CPU. In terms of scoring
performance, both algorithms lead to very close results.
However, gradient boosting is much faster at prediction time than the random
forest. This is because gradient boosting uses shallow trees. We will go into
detail about the hyperparameters to consider when optimizing ensemble methods
in the next notebook.
```
import os
import sys
from datetime import datetime
from HUGS.Processing import search
from HUGS.Client import Process, Search, Retrieve
from Acquire.ObjectStore import datetime_to_string
from Acquire.Client import User, Drive, Service, PAR, Authorisation, StorageCreds
from HUGS.Client import Search
from HUGS.Util import get_datapath
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
import matplotlib.cm as cm
import xarray as xr
import numpy as np
import json
import ipyleaflet as ipl
import ipywidgets as ipw
base_url= "https://hugs.acquire-aaai.com/t"
search = Search(service_url=base_url)
search_terms = ["ch4"]
locations = []
noaa_results = search.search(search_terms=search_terms, locations=locations, data_type="NOAA")
import ipyleaflet as ipl
center = [-5,37]
zoom = 2
noaa_map = ipl.Map(center=center, zoom=zoom)
noaa_map.layout.width = '65%'
noaa_map.layout.height = '400px'
# Load in the ACRG site data
acrg_json = "../site_data/acrg_site_info.json"
with open(acrg_json, "r") as f:
acrg_sites = json.load(f)
for res in noaa_results:
site = noaa_results[res]["metadata"]["site"]
site = site.upper()
species = noaa_results[res]["metadata"]["species"]
start_date = noaa_results[res]["start_date"]
end_date = noaa_results[res]["end_date"]
# Get the latitude and longitude from the ACRG site info
# As we don't have all the site data stored we skip a few datasets
try:
long = acrg_sites[site]["NOAA"]["longitude"]
lat = acrg_sites[site]["NOAA"]["latitude"]
long_name = acrg_sites[site]["NOAA"]["long_name"]
marker = ipl.Marker(location=(lat, long), draggable=False)
marker.popup = ipw.HTML(value=f"Site: {long_name} ({site})<br>Species: {species.upper()}<br>Daterange: {start_date} -<br>{end_date}")
noaa_map.add_layer(marker)
except:
pass
# Now we overlay the footprint image. In the future this will be dynamically updated from NetCDF
edgar_image_path = "Emissions_Americas.png"
edgar_layer = ipl.ImageOverlay(url=edgar_image_path, bounds=((-60,-140), (55,-30)))
noaa_map.add_layer(edgar_layer)
euro_search_terms = ["co2"]
euro_locations = []
eurocom_results = search.search(search_terms=euro_search_terms, locations=euro_locations, data_type="EUROCOM")
center = [55, 2]
zoom = 4
eurocom_map = ipl.Map(center=center, zoom=zoom)
eurocom_map.layout.width = '65%'
eurocom_map.layout.height = '400px'
for res in eurocom_results:
site = eurocom_results[res]["metadata"]["site"]
site = site.upper()
species = eurocom_results[res]["metadata"]["species"]
start_date = eurocom_results[res]["start_date"]
end_date = eurocom_results[res]["end_date"]
# Get the latitude and longitude from the ACRG site info
# As we don't have all the site data stored we skip a few datasets
try:
# Some sites may not be associated with EUROCOM in acrg_sites
network_key = list(acrg_sites[site].keys())[0]
long = acrg_sites[site][network_key]["longitude"]
lat = acrg_sites[site][network_key]["latitude"]
long_name = acrg_sites[site][network_key]["long_name"]
marker = ipl.Marker(location=(lat, long), draggable=False)
marker.popup = ipw.HTML(value=f"Site: {long_name} ({site})<br>Species: {species.upper()}<br>Daterange: {start_date} -<br>{end_date}")
eurocom_map.add_layer(marker)
except:
pass
center = [51.506815, -0.56]
zoom = 10
map_london = ipl.Map(center=center, zoom=zoom)
map_london.layout.width = '65%'
map_london.layout.height = '400px'
positron_layer = ipl.basemap_to_tiles(ipl.basemaps.CartoDB.Positron)
map_london.add_layer(positron_layer)
marker_legend = ipw.HTML(value="<img src='marker-icon-blue.png'> Current site<br><img src='marker-icon-green.png'> Future site")
marker_control = ipl.WidgetControl(widget=marker_legend, position="topright")
lghg_sites = "../site_data/lghg_sites.json"
map_london.add_control(marker_control)
with open(lghg_sites, "r") as f:
lghg_data = json.load(f)
for site in lghg_data["current"]:
curr_site = lghg_data["current"][site]
lat = curr_site["latitude"]
long = curr_site["longitude"]
site_name = curr_site["long_name"]
marker = ipl.Marker(location=(lat, long), draggable=False)
marker.popup = ipw.HTML(value=f"Site: {site_name} ({site})")
map_london.add_layer(marker)
for site in lghg_data["future"]:
fut_site = lghg_data["future"][site]
lat = fut_site["latitude"]
long = fut_site["longitude"]
site_name = fut_site["long_name"]
# Here we want a green icon
icon = ipl.Icon(icon_url='marker-icon-green.png', icon_size=[25, 40], icon_anchor=[12,15])
marker = ipl.Marker(location=(lat, long), draggable=False, icon=icon)
marker.popup = ipw.HTML(value=f"Site: {site_name} ({site})")
map_london.add_layer(marker)
london_footprint = "high_res_london_block_inferno_50p.png"
footprint = ipl.ImageOverlay(url=london_footprint, bounds=((51.2458, -1.259), (51.7092, 0.17389)))
map_london.add_layer(footprint)
box_layout = ipw.Layout(display='flex',
flex_flow='column',
align_items='center',
width='80%')
text_layout = ipw.Layout(display='flex',
flex_flow='column',
align_items='center',
width='70%')
text = "Figure 2. Examples of a) Global Measurements locations from the NOAA \
network, b) selected nation / continental ICOS stations and c) current and \
planned urban measurement sites from the LondonGHG project, currently \
available for analysis on the HUGS platform. GHGs currently on the platform \
include CO2, CH4, N2O, halocarbons and related tracers (e.g. CO). Panel \
(a) also shows an overlay of emissions estimates from the EDGAR dataset. \
A footprint over Greater London is overlaid on panel (c). These images are \
screenshots from a Jupyter notebook hosted on the HUGS platform."
figure_text = ipw.HTML(value=text, layout=text_layout)
complete = ipw.VBox(children=[noaa_map, eurocom_map, map_london, figure_text], layout=box_layout)
complete
```
# Applying Yields and Resolutions
This notebook serves two purposes. The first is to act as a test; you should be able to roughly reproduce the graphic below by running this notebook. The second is to serve as an example use case. Below, we want to emulate a detector that we know follows either the Lindhard or Sorenson model for ionization energy and has a resolution that affects where the data lands; we are able to apply both the yields and resolutions to the data in order to get a more realistic model of the detector's output.

## Generating a \*.root file for this notebook
The file used to generate the above plot is already present. However, if you would like to generate your own file for comparison, you can replace it as follows. In the top-level nrCascadeSim directory, after compiling (and activating any necessary environments if applicable), run:
```
./realizeCascades -n 10000 -o test-example/data/file.root levelfiles/Si28_ngam_all_cascades.txt
```
(Note that due to the randomness of the output, some variation is expected if you replace the file.)
## Notes
You may encounter some runtime warnings - these are expected.
If you have a \*.root file you want to call saved to a different location than mentioned in the instructions above, be sure to change line 21 to point to the correct location.
Please allow sufficient time to run this notebook; for very large root files it could take up to an hour. The provided file should only take a few minutes.
```
#Import Libraries
import uproot
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatch
plt.style.use('standard.mplstyle')
from matplotlib.lines import Line2D
#Custom libraries
import sys
sys.path.append('./python')
import nc_kinematics as nck
import lindhard as lin
import R68_yield as R68y
from hist import histogramable as h
#Build stuff!
#Select a file.
file = './data/file.root'
real_Lind = np.ndarray.flatten(np.asarray(h(file)[0]))
real_Sor = np.ndarray.flatten(np.asarray(h(file,model='Sorenson')[0]))
small_Lind = np.ndarray.flatten(np.asarray(h(file,scalefactor=0.2)[0]))
small_Sor = np.ndarray.flatten(np.asarray(h(file,model='Sorenson',scalefactor=0.2)[0]))
real_Lind = real_Lind[real_Lind >= 0]
real_Sor = real_Sor[real_Sor >= 0]
small_Lind = small_Lind[small_Lind >= 0]
small_Sor = small_Sor[small_Sor >= 0]
#From https://stackoverflow.com/questions/31517156/adjust-exponent-text-after-setting-scientific-limits-on-matplotlib-axis
def format_exponent(ax, axis='y'):
# Change the ticklabel format to scientific format
ax.ticklabel_format(axis=axis, style='sci', scilimits=(-2, 2))
# Get the appropriate axis
if axis == 'y':
ax_axis = ax.yaxis
x_pos = 0.0
y_pos = 1.0
horizontalalignment='left'
verticalalignment='bottom'
else:
ax_axis = ax.xaxis
x_pos = 1.0
y_pos = -0.05
horizontalalignment='right'
verticalalignment='top'
# Run plt.tight_layout() because otherwise the offset text doesn't update
plt.tight_layout()
# Get the offset value
offset = ax_axis.get_offset_text().get_text()
if len(offset) > 0:
# Get that exponent value and change it into latex format
minus_sign = u'\u2212'
        expo = float(offset.replace(minus_sign, '-').split('e')[-1])
offset_text = r'x$\mathregular{10^{%d}}$' %expo
# Turn off the offset text that's calculated automatically
ax_axis.offsetText.set_visible(False)
# Add in a text box at the top of the y axis
ax.text(x_pos, y_pos, offset_text, transform=ax.transAxes,
horizontalalignment=horizontalalignment,
verticalalignment=verticalalignment,fontsize=30)
return ax
fig, ax = plt.subplots(figsize=(16,12))
binsize = 8 #bin width in eVee
bins = np.arange(0,620,binsize)
plt.hist(small_Lind,alpha=0.7,label='Small Res (1/5, Lindhard)',histtype='step',edgecolor='black',density='True',linewidth=2,bins=bins)
plt.hist(small_Sor,alpha=0.7,label='Small Res (1/5, Sorenson)',histtype='step',edgecolor='black',linestyle='--',density='True',linewidth=2,bins=bins)
plt.hist(real_Sor,alpha=0.6,label='Sorenson',histtype='step',fill=True,density='True',bins=bins,linewidth=3,edgecolor='navy',color='C0')
plt.hist(real_Lind,alpha=0.6,label='Lindhard',histtype='step',fill=True,density='True',bins=bins,linewidth=3,edgecolor='#a30',color='C1')
plt.xlabel(r"Energy Yielded ($\mathrm{eV}_{\mathrm{ee}}$)",fontsize=50)
plt.ylabel("PDF",fontsize=50)#Counts/(total counts * bin width)")
ax = format_exponent(ax, axis='y')
ax.tick_params(axis='both',which='major',labelsize=40)
plt.xlim([0,None])
plt.ylim([6e-13,6e-3]) #Make corner less awkward. Smallest starting value that will make the extra 0 go away
#Legend
LindPatch = mpatch.Patch(facecolor='C1',edgecolor='#a30',linewidth=3,label='Lindhard',alpha=0.6)
SorPatch = mpatch.Patch(facecolor='C0',edgecolor='navy',linewidth=3,label='Sorenson',alpha=0.6)
LindLine = Line2D([0],[0],alpha=0.7,color='black',label='Small Res (1/5, Lindhard)')
SorLine = Line2D([0],[0],linestyle='--',alpha=0.7,color='black',label='Small Res (1/5, Sorenson)')
plt.legend(handles=[LindPatch,SorPatch,LindLine,SorLine],fontsize=40)
plt.show()
```
NOTE: values < 0 were manually removed. The resolution model generates a gaussian with a width which is proportionately larger for smaller values of E, resulting in (non-physically) negative values in the results.
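For reference, the kind of smearing that produces those negative values looks schematically like the snippet below; the width model `sigma(E)` is a hypothetical placeholder, not the exact resolution model used in this notebook's helper code:
```
import numpy as np

rng = np.random.default_rng(0)

def smear(energies, floor=10.0, frac=0.05):
    """Gaussian smearing whose relative width grows at low energy (illustrative model only)."""
    sigma = np.sqrt(floor**2 + (frac * energies)**2)
    return rng.normal(energies, sigma)

yields_eVee = np.linspace(5, 600, 1000)
smeared = smear(yields_eVee)
smeared = smeared[smeared >= 0]   # drop the non-physical negative values, as done above
```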
# TDA with Python using the Gudhi Library
# Visualizing simplicial complexes techniques
**Authors** : M. Glisse, V. Rouvreau
## Alpha complexes
We are going to [build a simplicial complex from a point cloud](Tuto-GUDHI-simplicial-complexes-from-data-points.ipynb). These points are randomly sampled from a 2-torus.
```
import numpy as np
import gudhi
ac = gudhi.AlphaComplex(off_file='datasets/tore3D_1307.off')
st = ac.create_simplex_tree()
```
We can retrieve coordinates for the points and triangles from the simplicial complex. Here, we limit the number of triangles by filtering them with their filtration values in the simplicial complex.
```
points = np.array([ac.get_point(i) for i in range(st.num_vertices())])
# We want to plot the alpha-complex with alpha=0.005 by default.
# We are only going to plot the triangles
triangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0])==3 and s[1] <= 0.005])
```
### Matplotlib
A convenient library to display triangulations is [Matplotlib](https://matplotlib.org/).
Let's display the triangulation.
```
# For matplotlib in a notebook
%matplotlib inline
# Visualization with matplotlib
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
l = ax.plot_trisurf(points[:,0], points[:,1], points[:,2], triangles=triangles)
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_zlim(-1.1, 1.1)
plt.show()
```
### Plotly
Thanks to [Plotly](https://plot.ly/python/), we can also visualize triangulations. Here, we added some interactivity with a [slider widget](https://www.plot.ly/python/slider-widget/) to set the maximal filtration value to use for displaying triangles.
Dynamic display does not work on GitHub, as GitHub renders only the exported HTML and not the JavaScript. However, you can try it on a Binder instance: [](https://mybinder.org/v2/gh/GUDHI/TDA-tutorial/master)
```
# Visualization with plotly
from plotly.offline import plot, iplot, init_notebook_mode
import plotly.graph_objects as go
init_notebook_mode()
fig = go.FigureWidget(data=[
go.Mesh3d(
x=points[:,0],
y=points[:,1],
z=points[:,2],
i = triangles[:,0],
j = triangles[:,1],
k = triangles[:,2],
)])
fig.update_layout(
scene = dict(
xaxis = dict(nticks=4, range=[-1.5,1.5],),
yaxis = dict(nticks=4, range=[-1.5,1.5],),
zaxis = dict(nticks=4, range=[-1.5,1.5],),))
def update_triangle(alpha):
if alpha < 0.0015:
alpha = 0.0015
print("Alpha: ", alpha)
triangles = np.array([s[0] for s in st.get_skeleton(2) if len(s[0])==3 and s[1] <= alpha])
print("Number of triangles: ", len(triangles[:,0]))
fig.data[0].i = triangles[:,0]
fig.data[0].j = triangles[:,1]
fig.data[0].k = triangles[:,2]
from ipywidgets import interactive, HBox, VBox
alpha_slider = interactive(update_triangle, alpha=(0, 0.01, 0.0001))
vb = VBox((fig, alpha_slider))
vb.layout.align_items = 'center'
vb
```
### Other libraries
Other examples are also available [here](https://gudhi.inria.fr/python/latest/examples.html):
- plot_rips_complex.py
- plot_alpha_complex.py
You can also visualize the simplicial complex with [Mayavi](https://docs.enthought.com/mayavi/mayavi/).
One may be able to use [iPyVolume meshes](https://ipyvolume.readthedocs.io/en/latest/mesh.html) for instance.
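For instance, a minimal Mayavi sketch (assuming Mayavi is installed, and reusing the `points` and `triangles` arrays computed above) could look like this:
```
# Visualization with Mayavi (requires a working Mayavi installation)
from mayavi import mlab

mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], triangles)
mlab.show()
```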
But we are not going to make an exhaustive list of visualization tools in Python.
```
import argparse
from datetime import datetime
import os
import random
from collections import deque
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
tf.keras.backend.set_floatx("float64")
parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DoubleDQN")
parser.add_argument("--env", default="CartPole-v0")
parser.add_argument("--lr", type=float, default=0.005)
parser.add_argument("--batch_size", type=int, default=256)
parser.add_argument("--gamma", type=float, default=0.95)
parser.add_argument("--eps", type=float, default=1.0)
parser.add_argument("--eps_decay", type=float, default=0.995)
parser.add_argument("--eps_min", type=float, default=0.01)
parser.add_argument("--logdir", default="logs")
args = parser.parse_args([])
logdir = os.path.join(
args.logdir, parser.prog, args.env, datetime.now().strftime("%Y%m%d-%H%M%S")
)
print(f"Saving training logs to:{logdir}")
writer = tf.summary.create_file_writer(logdir)
class ReplayBuffer:
def __init__(self, capacity=10000):
self.buffer = deque(maxlen=capacity)
def store(self, state, action, reward, next_state, done):
self.buffer.append([state, action, reward, next_state, done])
def sample(self):
sample = random.sample(self.buffer, args.batch_size)
states, actions, rewards, next_states, done = map(np.asarray, zip(*sample))
states = np.array(states).reshape(args.batch_size, -1)
next_states = np.array(next_states).reshape(args.batch_size, -1)
return states, actions, rewards, next_states, done
def size(self):
return len(self.buffer)
class DQN:
    def __init__(self, state_dim, action_dim):
        self.state_dim = state_dim
        self.action_dim = action_dim
self.epsilon = args.eps
self.model = self.nn_model()
def nn_model(self):
model = tf.keras.Sequential(
[
Input((self.state_dim,)),
Dense(32, activation="relu"),
Dense(16, activation="relu"),
Dense(self.action_dim),
]
)
model.compile(loss="mse", optimizer=Adam(args.lr))
return model
def predict(self, state):
return self.model.predict(state)
def get_action(self, state):
state = np.reshape(state, [1, self.state_dim])
self.epsilon *= args.eps_decay
self.epsilon = max(self.epsilon, args.eps_min)
q_value = self.predict(state)[0]
if np.random.random() < self.epsilon:
return random.randint(0, self.action_dim - 1)
return np.argmax(q_value)
def train(self, states, targets):
self.model.fit(states, targets, epochs=1)
class Agent:
def __init__(self, env):
self.env = env
self.state_dim = self.env.observation_space.shape[0]
self.action_dim = self.env.action_space.n
self.model = DQN(self.state_dim, self.action_dim)
self.target_model = DQN(self.state_dim, self.action_dim)
self.update_target()
self.buffer = ReplayBuffer()
def update_target(self):
weights = self.model.model.get_weights()
self.target_model.model.set_weights(weights)
def replay_experience(self):
for _ in range(10):
states, actions, rewards, next_states, done = self.buffer.sample()
targets = self.model.predict(states)
            # Double DQN target: the online network selects the greedy next action,
            # while the target network evaluates its Q-value.
            next_q_values = self.target_model.predict(next_states)[
range(args.batch_size),
np.argmax(self.model.predict(next_states), axis=1),
]
targets[range(args.batch_size), actions] = (
rewards + (1 - done) * next_q_values * args.gamma
)
self.model.train(states, targets)
def train(self, max_episodes=1000):
with writer.as_default():
for ep in range(max_episodes):
done, episode_reward = False, 0
observation = self.env.reset()
while not done:
action = self.model.get_action(observation)
next_observation, reward, done, _ = self.env.step(action)
self.buffer.store(
observation, action, reward, next_observation, done
)
episode_reward += reward
observation = next_observation
if self.buffer.size() >= args.batch_size:
self.replay_experience()
self.update_target()
print(f"Episode#{ep} Reward:{episode_reward}")
tf.summary.scalar("episode_reward", episode_reward, step=ep)
if __name__ == "__main__":
env = gym.make("CartPole-v0")
agent = Agent(env)
agent.train(max_episodes=2) # Increase max_episodes value
```
|
github_jupyter
|
import argparse
from datetime import datetime
import os
import random
from collections import deque
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
tf.keras.backend.set_floatx("float64")
parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DoubleDQN")
parser.add_argument("--env", default="CartPole-v0")
parser.add_argument("--lr", type=float, default=0.005)
parser.add_argument("--batch_size", type=int, default=256)
parser.add_argument("--gamma", type=float, default=0.95)
parser.add_argument("--eps", type=float, default=1.0)
parser.add_argument("--eps_decay", type=float, default=0.995)
parser.add_argument("--eps_min", type=float, default=0.01)
parser.add_argument("--logdir", default="logs")
args = parser.parse_args([])
logdir = os.path.join(
args.logdir, parser.prog, args.env, datetime.now().strftime("%Y%m%d-%H%M%S")
)
print(f"Saving training logs to:{logdir}")
writer = tf.summary.create_file_writer(logdir)
class ReplayBuffer:
def __init__(self, capacity=10000):
self.buffer = deque(maxlen=capacity)
def store(self, state, action, reward, next_state, done):
self.buffer.append([state, action, reward, next_state, done])
def sample(self):
sample = random.sample(self.buffer, args.batch_size)
states, actions, rewards, next_states, done = map(np.asarray, zip(*sample))
states = np.array(states).reshape(args.batch_size, -1)
next_states = np.array(next_states).reshape(args.batch_size, -1)
return states, actions, rewards, next_states, done
def size(self):
return len(self.buffer)
class DQN:
def __init__(self, state_dim, aciton_dim):
self.state_dim = state_dim
self.action_dim = aciton_dim
self.epsilon = args.eps
self.model = self.nn_model()
def nn_model(self):
model = tf.keras.Sequential(
[
Input((self.state_dim,)),
Dense(32, activation="relu"),
Dense(16, activation="relu"),
Dense(self.action_dim),
]
)
model.compile(loss="mse", optimizer=Adam(args.lr))
return model
def predict(self, state):
return self.model.predict(state)
def get_action(self, state):
state = np.reshape(state, [1, self.state_dim])
self.epsilon *= args.eps_decay
self.epsilon = max(self.epsilon, args.eps_min)
q_value = self.predict(state)[0]
if np.random.random() < self.epsilon:
return random.randint(0, self.action_dim - 1)
return np.argmax(q_value)
def train(self, states, targets):
self.model.fit(states, targets, epochs=1)
class Agent:
def __init__(self, env):
self.env = env
self.state_dim = self.env.observation_space.shape[0]
self.action_dim = self.env.action_space.n
self.model = DQN(self.state_dim, self.action_dim)
self.target_model = DQN(self.state_dim, self.action_dim)
self.update_target()
self.buffer = ReplayBuffer()
def update_target(self):
weights = self.model.model.get_weights()
self.target_model.model.set_weights(weights)
def replay_experience(self):
for _ in range(10):
states, actions, rewards, next_states, done = self.buffer.sample()
targets = self.model.predict(states)
next_q_values = self.target_model.predict(next_states)[
range(args.batch_size),
np.argmax(self.model.predict(next_states), axis=1),
]
targets[range(args.batch_size), actions] = (
rewards + (1 - done) * next_q_values * args.gamma
)
self.model.train(states, targets)
def train(self, max_episodes=1000):
with writer.as_default():
for ep in range(max_episodes):
done, episode_reward = False, 0
observation = self.env.reset()
while not done:
action = self.model.get_action(observation)
next_observation, reward, done, _ = self.env.step(action)
self.buffer.store(
observation, action, reward, next_observation, done
)
episode_reward += reward
observation = next_observation
if self.buffer.size() >= args.batch_size:
self.replay_experience()
self.update_target()
print(f"Episode#{ep} Reward:{episode_reward}")
tf.summary.scalar("episode_reward", episode_reward, step=ep)
if __name__ == "__main__":
env = gym.make("CartPole-v0")
agent = Agent(env)
agent.train(max_episodes=2) # Increase max_episodes value
| 0.767603 | 0.222362 |
# S001 King Keltner
The King Keltner strategy is built on moving averages. The basic idea is to derive a central price from the high, low and close, compute upper and lower price-channel bands around it, go long when the price crosses above the upper band and go short when it crosses below the lower band. Since not every breakout succeeds, a sensible stop-loss is particularly important; in the KK strategy the central price itself serves as the exit signal.
1. Compute the central price MP: the 40-period moving average of the mean of high, low and close
MP = MA((high+low+close)/3, 40)
2. Compute the true range TR
TR = max(abs(high_t - low_t), abs(high_t - close_{t-1}), abs(low_t - close_{t-1}))
3. Compute the upper and lower channel bands (upBand, dnBand), where mu is a tunable parameter with default value 1
upBand = MP + mu*MA(TR, 40)
dnBand = MP - mu*MA(TR, 40)
4. Compute the exit price
FP = MP = MA((high+low+close)/3, 40)
5. Entry and exit conditions
BUY_OPEN: current-period MP > previous-period MP AND current price > upBand
SELL_OPEN: current-period MP < previous-period MP AND current price < dnBand
Close position: the current price crosses below the exit price FP
Close position: the current price crosses above the exit price FP
Closing the position acts both as take-profit and as stop-loss.
That concludes the brief description of the strategy; the analysis and strategy code follow below.
```
import QUANTAXIS as QA
N = 40
mu = 1
import pandas as pd
def strategy001(data, N=40, mu=1):
MP = QA.MA((data.high+data.low+data.close)/3, N)
TR = pd.concat([abs(data.high - data.low), abs(data.high- data.close.shift(1)), abs(data.low - data.close.shift(1))],axis=1).max(axis=1)
upBand = MP + mu*QA.MA(TR, N)
dnBand = MP - mu*QA.MA(TR, N)
FP = MP
return pd.DataFrame({'MP': MP, 'TR': TR, 'upBand': upBand, 'dnBand':dnBand, 'FP':MP})
```
## Loading the data
```
data = QA.QA_fetch_future_day_adv("RBL8", '2018-05-01', '2019-09-10')
data
ind = data.add_func(strategy001)
print(ind.tail())
```
## Implementing the strategy
```
MPDIFF = ind.MP.diff().dropna()
# At the testing stage, a simple pseudo-backtest loop is enough
lastprice = 0
for idx, item in data.iterrows():
try:
if MPDIFF.loc[idx]>0 and item['close']> ind.upBand.loc[idx]:
print('buyOPEN _ {}'.format(idx))
if MPDIFF.loc[idx]<0 and item['close']< ind.dnBand.loc[idx]:
print('sellOPEN_ {}'.format(idx))
if lastprice< ind.FP.loc[idx] and item['close']> ind.FP.loc[idx]:
print('close')
if lastprice> ind.FP.loc[idx] and item['close']< ind.FP.loc[idx]:
print('close')
except:
pass
lastprice = item['close']
```
## Backtest code
```
user = QA.QA_User(username='quantaxiss', password='quantaxis')
portfolio = user.new_portfolio('strategy101')
acc = portfolio.new_account(account_cookie='acc001', init_hold={'RBL8':0}, init_cash=30000, market_type=QA.MARKET_TYPE.FUTURE_CN)
lastprice = 0
for idx, item in data.iterrows():
try:
if acc.hold_available.get(idx[1],0) ==0 and MPDIFF.loc[idx]>0 and item['close']> ind.upBand.loc[idx]:
print('buyOPEN _ {}'.format(idx))
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_OPEN,
trade_time= idx[0])
if acc.hold_available.get(idx[1],0) ==0 and MPDIFF.loc[idx]<0 and item['close']< ind.dnBand.loc[idx]:
print('sellOPEN_ {}'.format(idx))
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_OPEN,
trade_time= idx[0])
if lastprice< ind.FP.loc[idx] and item['close']> ind.FP.loc[idx]:
print('close')
if acc.hold_available.get(idx[1],0)>0:
                # take profit on the long position
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_CLOSE,
trade_time= idx[0])
elif acc.hold_available.get(idx[1],0)<0:
                # stop loss on the short position
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_CLOSE,
trade_time= idx[0])
if lastprice> ind.FP.loc[idx] and item['close']< ind.FP.loc[idx]:
print('close')
if acc.hold_available.get(idx[1],0)>0:
                # stop loss on the long position
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_CLOSE,
trade_time= idx[0])
elif acc.hold_available.get(idx[1],0)<0:
                # take profit on the short position
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_CLOSE,
trade_time= idx[0])
except:
pass
lastprice = item['close']
acc.history_table
performance = QA.QA_Performance(acc)
performance.pnl_fifo
acc.market_type
risk = QA.QA_Risk(acc)
# daily total assets valued at market prices
(risk.daily_market_value+ acc.daily_cash.cash).plot()
# total assets computed from frozen margin
(acc.daily_frozen+ acc.daily_cash.cash).plot()
risk.plot_assets_curve()
risk.save()
acc.save()
```
|
github_jupyter
|
import QUANTAXIS as QA
N = 40
mu = 1
import pandas as pd
def strategy001(data, N=40, mu=1):
MP = QA.MA((data.high+data.low+data.close)/3, N)
TR = pd.concat([abs(data.high - data.low), abs(data.high- data.close.shift(1)), abs(data.low - data.close.shift(1))],axis=1).max(axis=1)
upBand = MP + mu*QA.MA(TR, N)
dnBand = MP - mu*QA.MA(TR, N)
FP = MP
return pd.DataFrame({'MP': MP, 'TR': TR, 'upBand': upBand, 'dnBand':dnBand, 'FP':MP})
data = QA.QA_fetch_future_day_adv("RBL8", '2018-05-01', '2019-09-10')
data
ind = data.add_func(strategy001)
print(ind.tail())
MPDIFF = ind.MP.diff().dropna()
# 在测试阶段, 我们只需要写个伪回测代码即可
lastprice = 0
for idx, item in data.iterrows():
try:
if MPDIFF.loc[idx]>0 and item['close']> ind.upBand.loc[idx]:
print('buyOPEN _ {}'.format(idx))
if MPDIFF.loc[idx]<0 and item['close']< ind.dnBand.loc[idx]:
print('sellOPEN_ {}'.format(idx))
if lastprice< ind.FP.loc[idx] and item['close']> ind.FP.loc[idx]:
print('close')
if lastprice> ind.FP.loc[idx] and item['close']< ind.FP.loc[idx]:
print('close')
except:
pass
lastprice = item['close']
user = QA.QA_User(username='quantaxiss', password='quantaxis')
portfolio = user.new_portfolio('strategy101')
acc = portfolio.new_account(account_cookie='acc001', init_hold={'RBL8':0}, init_cash=30000, market_type=QA.MARKET_TYPE.FUTURE_CN)
lastprice = 0
for idx, item in data.iterrows():
try:
if acc.hold_available.get(idx[1],0) ==0 and MPDIFF.loc[idx]>0 and item['close']> ind.upBand.loc[idx]:
print('buyOPEN _ {}'.format(idx))
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_OPEN,
trade_time= idx[0])
if acc.hold_available.get(idx[1],0) ==0 and MPDIFF.loc[idx]<0 and item['close']< ind.dnBand.loc[idx]:
print('sellOPEN_ {}'.format(idx))
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_OPEN,
trade_time= idx[0])
if lastprice< ind.FP.loc[idx] and item['close']> ind.FP.loc[idx]:
print('close')
if acc.hold_available.get(idx[1],0)>0:
#多单止盈
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_CLOSE,
trade_time= idx[0])
elif acc.hold_available.get(idx[1],0)<0:
# 空单止损
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_CLOSE,
trade_time= idx[0])
if lastprice> ind.FP.loc[idx] and item['close']< ind.FP.loc[idx]:
print('close')
if acc.hold_available.get(idx[1],0)>0:
#多单止损
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.SELL_CLOSE,
trade_time= idx[0])
elif acc.hold_available.get(idx[1],0)<0:
# 空单止盈
acc.receive_simpledeal(
code= idx[1],
trade_price = item['close'],
trade_amount = 1,
trade_towards= QA.ORDER_DIRECTION.BUY_CLOSE,
trade_time= idx[0])
except:
pass
lastprice = item['close']
acc.history_table
performance = QA.QA_Performance(acc)
performance.pnl_fifo
acc.market_type
risk = QA.QA_Risk(acc)
# 用市价计算的每日总资产
(risk.daily_market_value+ acc.daily_cash.cash).plot()
# 用冻结保证金计算的总资产
(acc.daily_frozen+ acc.daily_cash.cash).plot()
risk.plot_assets_curve()
risk.save()
acc.save()
| 0.129114 | 0.694931 |
# Taxi order forecasting
## Preparation
```
import pandas as pd
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
from tqdm.auto import tqdm
from sklearn.metrics import make_scorer
import re  # needed for the re.sub calls in the display helpers below
import time
from sklearn.ensemble import RandomForestClassifier  # referenced by make_clf below
from sklearn.tree import DecisionTreeClassifier  # referenced by make_clf below
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import TimeSeriesSplit
from sklearn.preprocessing import StandardScaler
# pandas display options
pd.options.display.max_columns = 100
pd.options.display.max_rows = 200
pd.options.display.max_colwidth = 300
# Debugging parameters
DEBUGING = False # print the current parameters during the model search
models_params_find = False
R_STATE = 737
N_JOBS = -1
filled_VehicleType=False
MODEL_TEXT_NAME = {'LinearReg':'Linear regression'
                   ,'DecTreeReg':'Decision tree regressor'
                   ,'RndForestReg':'Random forest regressor'
                   ,'DecTreeCls':'Decision tree classifier'
                   ,'RndForestCls':'Random forest classifier'
                   ,'LGBMReg':'LightGBM regressor ensemble'}
def disp_font_size(text,font_size=3):
#display(HTML(f"<font size='{font_size}'>{text}</font>"))
display(text)
def info_print (data, title, column='', bins=20, unit=''
, prntInfo=True, prntGraph=True, prnt_smpl=True
, returnMetric=False, drop_out=False):
'''
    function for displaying summary information about a dataset
'''
div = ''.join(['=' for i in range(1,len(title))])
print(div)
print('\n',title,'\n')
print(div)
if prntInfo:
data.info()
print(div)
    print(f'Number of duplicate rows: {data.duplicated().sum()}')
print(div)
if data.duplicated().sum() > 0:
try:
            print(f'Example duplicates:')
display(data.loc[data.duplicated(keep=False)].sort_values(by=data.columns[1]).head())
print(div)
except: None
if prnt_smpl:
        print(f'First 5 records:')
display(data.head())
print(div)
if column != '':
data_prin = data[column]
else:
data_prin = data
if prntGraph:
#try:
        std_dev = np.std(data_prin) # standard deviation
        mu = data_prin.mean() # mean value
        sigma3_min = mu - 3*std_dev
        sigma3_max = mu + 3*std_dev
        print(f'Mean: {round(mu,2)} \n')
        print(f'Standard deviation: {round(std_dev,2)} \n')
        print(f'Confidence interval: from {round(sigma3_min,2)} to {round(sigma3_max,2)} \n')
        print(f'Minimum: {round(data_prin.min(),4)} Maximum: {round(data_prin.max(),4)} \n')
fig, ax = plt.subplots(1,2,figsize=(17, 3))
fig.suptitle(title)
sns.boxplot(x=data_prin, ax=ax[0])
sns.histplot(x=data_prin, ax=ax[1],bins=bins)
        # bounds according to the three-sigma rule
plt.axvline(sigma3_min,color='r',linestyle='--')
plt.axvline(sigma3_max,color='r',linestyle='--')
ax[0].set_xlabel(unit, fontsize=15, color='black')
ax[1].set_xlabel(unit, fontsize=15, color='black')
plt.show()
if returnMetric:
if drop_out:
mu = data_prin.loc[(sigma3_min < data_prin) & (data_prin < sigma3_max)].mean()
            print(f'Mean excluding outliers: {round(mu,2)} \n')
return mu, std_dev
#except: None
try:
taxi = pd.read_csv('/datasets/taxi.csv')
except:
taxi = pd.read_csv('taxi.csv')
display(taxi.info())
display(taxi.head(3))
taxi['datetime'] = taxi['datetime'].astype('datetime64')
taxi.set_index('datetime',inplace=True)
taxi.sort_index(inplace=True)
taxi_day = taxi.resample('1D').sum()
taxi = taxi.resample('1h').sum()
display(taxi.head(3))
```
## Analysis
```
ax = taxi['num_orders'].plot(figsize=(20, 5))
taxi['num_orders'].shift().rolling(10).mean().plot(figsize=(20, 5),ax=ax)
decomposed = seasonal_decompose(taxi['num_orders'])
decomposed.trend.plot(figsize=(20, 5),title='Trend')
decomposed.seasonal['2018-06':'2018-07'].plot(figsize=(20, 5),title='By month')
decomposed.seasonal['2018-07-01':'2018-07-15'].plot(figsize=(20, 5),title='By day')
decomposed.seasonal['2018-07-07 00:00:00':'2018-07-08 23:59:59'].plot(figsize=(20, 5),title='By hour')
info_print (taxi, 'num_orders', column='num_orders', bins=20, unit=''
, prntInfo=True, prntGraph=True, prnt_smpl=False)
decomp_day = seasonal_decompose(taxi_day['num_orders'])
decomp_day.seasonal['2018-07':'2018-08'].plot(figsize=(20, 5),title='By day of week')
```
## Training
```
def rmse(test,pred):
return round(np.sqrt(mean_squared_error(test,pred)),4)
def display_metric(score, scorer, title=''):
"""
    Helper for displaying a metric
"""
scorer_name = re.sub(r".*\(|\,.*|\)", '', str(scorer))
disp_font_size(f" {title} {scorer_name.upper()}: = {score}",5)
def score_cv(model, features, target, scorer, cv=5, mean=True):
el_time = time.time()
result = cross_val_score(model, features, target, scoring=scorer, cv=TimeSeriesSplit(n_splits=2), n_jobs=N_JOBS)
el_time = round(time.time() - el_time,2)
if mean: result = round(result.mean(),4)
return result, el_time
def make_pipe(model, **kwargs):
return Pipeline([
#('scale', StandardScaler()),
('clf', model)
])
def make_clf(model_type, **kwargs):
if model_type == 'RndForestReg':
model = make_pipe(RandomForestRegressor(random_state=R_STATE, **kwargs))
elif model_type == 'DecTreeReg':
model = make_pipe(DecisionTreeRegressor(random_state=R_STATE, **kwargs))
elif model_type == 'LinearReg':
model = make_pipe(LinearRegression(**kwargs))
elif model_type == 'RndForestCls':
model = make_pipe(RandomForestClassifier(random_state=R_STATE, **kwargs))
elif model_type == 'DecTreeCls':
model = make_pipe(DecisionTreeClassifier(random_state=R_STATE, **kwargs))
elif model_type == 'LGBMReg':
model = make_pipe(LGBMRegressor(boosting_type='gbdt',random_state=R_STATE, **kwargs))
else: return 0, 'Тип модели не найден'
return model
# Table for storing the best results of each model
columns_model = ['model','g_param','Score']
models_valid = pd.DataFrame(columns=columns_model)
def get_model(params, train_features, train_target):
model = make_clf(params['model'], **params['g_param'])
pbar = tqdm(total = 1)
model.fit(train_features, train_target)
pbar.update(1)
pbar.close()
return model
def get_param_list(kwargs):
params_list = pd.DataFrame()
if len(kwargs) > 0:
new_str = pd.Series()
for key, param in kwargs.items():
old_param_list = params_list.copy()
params_list = pd.DataFrame()
num_params = len(old_param_list)
if num_params < 1 : num_params=1
for str_ind in range(num_params):
try:
for p in param:
try:
new_str = old_param_list.iloc[str_ind]
except: None
new_str[key] = p
params_list = params_list.append(new_str, ignore_index=True)
except:
params_list[key] = param
if len(params_list) < 1: params_list.loc[0,'no_param']='no_param'
return params_list
def get_model_params(model_type, features, target, scorer, title=''
, disp_metr=True
,returnModel=False, debug=False, **kwargs):
"""
    Function for creating models over a parameter grid and evaluating them
"""
columns_model = ['model','g_param','el_time','score']
models_param = pd.DataFrame(columns=columns_model)
print('==========================')
disp_font_size(f"Модель: {MODEL_TEXT_NAME[model_type]} {title}",5)
params_list = get_param_list(kwargs)
pbar = tqdm(total = len(params_list))
params = {}
for ind, row in params_list.iterrows():
for col_n, val in row.items():
if col_n != 'no_param':
if float(val)%1>0: params[col_n] = float(val)
else: params[col_n] = int(val)
model = make_clf(model_type, **params)
try:
score, el_time = score_cv(model, features, target, scorer)
if debug: print(f'{kwargs}|{score}')
models_param = models_param.append({'model': model_type
,'g_param':{**params}
,'el_time':el_time
,'score':score}
, ignore_index=True)
except:
models_param = models_param.append({'model': model_type
,'g_param':{**params}
,'el_time':'Err'
,'score':'Err'}
, ignore_index=True)
pbar.update(1)
pbar.close()
    # Output the results
models_param = models_param.loc[~models_param['score'].isin(['Err','-'] )]
models_param = models_param.sort_values(by='score', ascending=False).reset_index(drop=True)
if disp_metr:
display(models_param)
disp_font_size(re.sub(r'[\{\}]', '', str(models_param.loc[0,'g_param'])),5)
disp_font_size(f"Время: {models_param.loc[0,'el_time']}",5)
display_metric(models_param.loc[0,'score'], scorer)
return models_param
def models_test(params, features, target, test_features, test_target, scorer):
models_p = pd.DataFrame(columns=['model','g_param','test_time','test_score'])
tested_model = pd.DataFrame()
for i, row in params.iterrows():
el_time = time.time()
try:
model = get_model(row, features, target)
predict = model.predict(test_features)
score = round(scorer(test_target,predict)) * -1
except:
score = 'Err'
el_time = round(time.time() - el_time,2)
disp_font_size(MODEL_TEXT_NAME[row['model']],5)
disp_font_size(re.sub(r'[\{\}]', '', str(row['g_param'])),4)
        display_metric(score, '', title='Test set')
tested_model = row[['model','g_param']]
tested_model['test_score'] = score
tested_model['test_time'] = el_time
models_p = models_p.append(tested_model, ignore_index=True)
return models_p.sort_values(by='test_score', ascending=True).reset_index(drop=True)
def models_serch(params, features, target, test_features=None, test_target=None, scorer=[rmse,False]
, test=False, test_num=1):
models_p = pd.DataFrame()
my_scorer = make_scorer(scorer[0], greater_is_better=scorer[1])
for model, params in params.items():
model_name = re.sub(r".*\_", '', str(model))
log = get_model_params(model_name, features, target, my_scorer, disp_metr=True, **params, debug=DEBUGING)
sort_column = 'score'
test_num_str = test_num
log['test_time'] = '-'
log['test_score'] = '-'
if test:
print('======================')
disp_font_size("Результаты на тестовой выборке",5)
if test_num < len(log): test_num_str = test_num
else: test_num_str = len(log)
models_test_p = models_test(log.head(test_num_str+1), features, target, test_features, test_target
, scorer=scorer[0])
display(models_test_p)
log.loc[0:test_num,'test_time'] = models_test_p.loc[0:test_num,'test_time']
log.loc[0:test_num,'test_score'] = models_test_p.loc[0:test_num,'test_score']
sort_column = 'test_score'
models_p = models_p.append(log, ignore_index=True,sort=False)
result = models_p.loc[~models_p[sort_column].isin(['Err','-'] )]
return result.sort_values(by=sort_column, ascending=False).reset_index(drop=True)
def div_features_target(df,targetname):
"""
    Split a table into features and the target column
"""
return df.drop([targetname], axis=1), df[targetname]
def make_features(data, max_lag=6, rolling_mean_size=10):
# data['year'] = data.index.year
# data['month'] = data.index.month
#data['month_day'] = f"{data.index.month}_{data.index.day}"
data['hour'] = data.index.hour
data['dayofweek'] = data.index.dayofweek
for lag in range(1, max_lag + 1):
data['lag_{}'.format(lag)] = data['num_orders'].shift(lag)
data['rolling_mean'] = data['num_orders'].shift().rolling(rolling_mean_size).mean()
params_find = False
if params_find:
params_s ={'max_lag': range(15,32,1), 'rolling_mean_size': range(24,169,24)}
ft_params_list = get_param_list(params_s)
ft_params_score = pd.DataFrame(columns=['g_param','train_score','test_score'])
model = LinearRegression()
params = {}
pbar = tqdm(total = len(ft_params_list))
for ind, row in ft_params_list.iterrows():
taxi_copy = taxi.copy()
for col_n, val in row.items():
if col_n != 'no_param':
if float(val)%1>0: params[col_n] = float(val)
else: params[col_n] = int(val)
make_features(taxi_copy, **params)
train, test = train_test_split(taxi_copy, shuffle=False, test_size=0.1)
train = train.dropna()
train_features, train_target = div_features_target(train,'num_orders')
test_features, test_target = div_features_target(test, 'num_orders')
model.fit(train_features, train_target)
train_pred = model.predict(train_features)
test_pred = model.predict(test_features)
ft_params_score = ft_params_score.append({'g_param':{**params},
'train_score':rmse(train_target,train_pred),
'test_score':rmse(test_target,test_pred)},
ignore_index=True)
pbar.update(1)
    pbar.close()
ft_params_score = ft_params_score.sort_values(by='test_score', ascending=True).reset_index(drop=True)
else:
ft_params_score = pd.DataFrame({
'g_param':[{'max_lag': 31, 'rolling_mean_size': 72},
{'max_lag': 2, 'rolling_mean_size': 72},
{'max_lag': 99, 'rolling_mean_size': 24}
],
'train_score':[25, 30, 22],
'test_score':[44, 49, 39]
})
display(ft_params_score)
make_features(taxi, **ft_params_score.loc[0,'g_param'])
train, test = train_test_split(taxi, shuffle=False, test_size=0.1)
train = train.dropna()
train_features, train_target = div_features_target(train,'num_orders')
test_features, test_target = div_features_target(test, 'num_orders')
```
### Sanity check
```
print("Среднее количество заказов такси в час:", round(test['num_orders'].mean(),4))
print("Все значения тестовой выборки предсказываются одним и тем же числом (константой):")
pred_median = np.ones(test.shape) * train['num_orders'].median()
print("RMSE:", rmse(test,pred_median))
print("Новое значение x(t) прогнозируется предыдущим значением ряда, то есть x(t-1):")
pred_previous = test.shift()
pred_previous.iloc[0] = train['num_orders'].iloc[-1]
print("RMSE:", rmse(test,pred_previous))
models_params_find = False
if models_params_find:
models_param = {
'LinearReg':{},
'DecTreeReg':{ 'max_depth': range(10,12,1)},
'RndForestReg':{'max_depth': range(10,12,1), 'n_estimators': range(39,41,1)},
'LGBMReg':{'max_depth': range(5,6,1),'n_estimators': range(855,858,1),
'learning_rate':[0.01]}
}
model_list = models_serch(models_param, train_features, train_target
, test_features, test_target
, test=True, test_num=1)
else:
model_list = pd.DataFrame({'model' : ['LGBMReg','RndForestReg','LinearReg'],
'g_param' : [{'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 856},
{'max_depth': 10, 'n_estimators': 39},
{}],
'el_time' : [12.7, 2.4, 0.43],
'score' : [-27, -27, -27],
'test_time' : [8.7, 2.3, 0.22],
'test_score' : [-43, -44, -44]}
)
display(model_list)
def check_model_predict(model_param, train_features, train_target, test_features, test_target):
model_01 = get_model(model_param, train_features, train_target)
day_test_pred = model_01.predict(test_features)
day_test_target_pred = pd.DataFrame()
day_test_target_pred['target'] = test_target
day_test_target_pred['pred'] = day_test_pred
ax = day_test_target_pred['target'].plot(figsize=(20, 5), label='target', title=model_param['model'])
day_test_target_pred['pred'].plot(figsize=(20, 5),ax=ax, label='prediction')
plt.legend()
plt.show()
for i in range(len(model_list)):
check_model_predict(model_list.loc[i], train_features, train_target
, test_features['2018-08-20':'2018-08-23']
, test_target['2018-08-20':'2018-08-23'])
```
### Conclusion
The engineered features and the identified patterns of the time series matter far more for the final metric than the choice of model!
Judging by the plot, the random forest model predicts the sharp swings and peaks more accurately.
|
github_jupyter
|
import pandas as pd
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from lightgbm import LGBMRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.pipeline import Pipeline
from tqdm.auto import tqdm
from sklearn.metrics.scorer import make_scorer
import time
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import TimeSeriesSplit
from sklearn.preprocessing import StandardScaler
#параметры
pd.options.display.max_columns = 100
pd.options.display.max_rows = 200
pd.options.display.max_colwidth = 300
#Параметры для отладки
DEBUGING = False #вывод на экран текущих параметров
models_params_find = False
R_STATE = 737
N_JOBS = -1
filled_VehicleType=False
MODEL_TEXT_NAME = {'LinearReg':'Линейная регрессия'
,'DecTreeReg':'Дерево принятия решений для регрессии'
,'RndForestReg':'Случайный лес для регрессии'
,'DecTreeCls':'Дерево принятия решений для классификации'
,'RndForestCls':'Случайный лес для классификации'
,'LGBMReg':'Ансамбль LightGBM для регрессии'}
def disp_font_size(text,font_size=3):
#display(HTML(f"<font size='{font_size}'>{text}</font>"))
display(text)
def info_print (data, title, column='', bins=20, unit=''
, prntInfo=True, prntGraph=True, prnt_smpl=True
, returnMetric=False, drop_out=False):
'''
функция для отображения информации о наборе данных
'''
div = ''.join(['=' for i in range(1,len(title))])
print(div)
print('\n',title,'\n')
print(div)
if prntInfo:
data.info()
print(div)
print(f'Количество дубликатов: {data.duplicated().sum()}')
print(div)
if data.duplicated().sum() > 0:
try:
print(f'Пример дубликатов:')
display(data.loc[data.duplicated(keep=False)].sort_values(by=data.columns[1]).head())
print(div)
except: None
if prnt_smpl:
print(f'Пример первых 5 записей:')
display(data.head())
print(div)
if column != '':
data_prin = data[column]
else:
data_prin = data
if prntGraph:
#try:
std_dev = np.std(data_prin) #Стандартное распределение
mu = data_prin.mean() #среднее значение
sigma3_min = mu - 3*std_dev
sigma3_max = mu + 3*std_dev
print(f'Среднее: {round(mu,2)} \n')
print(f'Стандартное отклонение: {round(std_dev,2)} \n')
print(f'Доверительный интервал: от {round(sigma3_min,2)} до {round(sigma3_max,2)} \n')
print(f'Минимум: {round(data_prin.min(),4)} Максимум: {round(data_prin.max(),4)} \n')
fig, ax = plt.subplots(1,2,figsize=(17, 3))
fig.suptitle(title)
sns.boxplot(x=data_prin, ax=ax[0])
sns.histplot(x=data_prin, ax=ax[1],bins=bins)
#границы по правилам трёх сигм
plt.axvline(sigma3_min,color='r',linestyle='--')
plt.axvline(sigma3_max,color='r',linestyle='--')
ax[0].set_xlabel(unit, fontsize=15, color='black')
ax[1].set_xlabel(unit, fontsize=15, color='black')
plt.show()
if returnMetric:
if drop_out:
mu = data_prin.loc[(sigma3_min < data_prin) & (data_prin < sigma3_max)].mean()
print(f'Среднее без учёта выбросов: {round(mu,2)} \n')
return mu, std_dev
#except: None
try:
taxi = pd.read_csv('/datasets/taxi.csv')
except:
taxi = pd.read_csv('taxi.csv')
display(taxi.info())
display(taxi.head(3))
taxi['datetime'] = taxi['datetime'].astype('datetime64')
taxi.set_index('datetime',inplace=True)
taxi.sort_index(inplace=True)
taxi_day = taxi.resample('1D').sum()
taxi = taxi.resample('1h').sum()
display(taxi.head(3))
ax = taxi['num_orders'].plot(figsize=(20, 5))
taxi['num_orders'].shift().rolling(10).mean().plot(figsize=(20, 5),ax=ax)
decomposed = seasonal_decompose(taxi['num_orders'])
decomposed.trend.plot(figsize=(20, 5),title='Тренд')
decomposed.seasonal['2018-06':'2018-07'].plot(figsize=(20, 5),title='по Месяца')
decomposed.seasonal['2018-07-01':'2018-07-15'].plot(figsize=(20, 5),title='по Дням')
decomposed.seasonal['2018-07-07 00:00:00':'2018-07-08 23:59:59'].plot(figsize=(20, 5),title='по Часам')
info_print (taxi, 'num_orders', column='num_orders', bins=20, unit=''
, prntInfo=True, prntGraph=True, prnt_smpl=False)
decomp_day = seasonal_decompose(taxi_day['num_orders'])
decomp_day.seasonal['2018-07':'2018-08'].plot(figsize=(20, 5),title='по дням недели')
def rmse(test,pred):
return round(np.sqrt(mean_squared_error(test,pred)),4)
def display_metric(score, scorer, title=''):
"""
Функция для вывода метрик
"""
scorer_name = re.sub(r".*\(|\,.*|\)", '', str(scorer))
disp_font_size(f" {title} {scorer_name.upper()}: = {score}",5)
def score_cv(model, features, target, scorer, cv=5, mean=True):
el_time = time.time()
result = cross_val_score(model, features, target, scoring=scorer, cv=TimeSeriesSplit(n_splits=2), n_jobs=N_JOBS)
el_time = round(time.time() - el_time,2)
if mean: result = round(result.mean(),4)
return result, el_time
def make_pipe(model, **kwargs):
return Pipeline([
#('scale', StandardScaler()),
('clf', model)
])
def make_clf(model_type, **kwargs):
if model_type == 'RndForestReg':
model = make_pipe(RandomForestRegressor(random_state=R_STATE, **kwargs))
elif model_type == 'DecTreeReg':
model = make_pipe(DecisionTreeRegressor(random_state=R_STATE, **kwargs))
elif model_type == 'LinearReg':
model = make_pipe(LinearRegression(**kwargs))
elif model_type == 'RndForestCls':
model = make_pipe(RandomForestClassifier(random_state=R_STATE, **kwargs))
elif model_type == 'DecTreeCls':
model = make_pipe(DecisionTreeClassifier(random_state=R_STATE, **kwargs))
elif model_type == 'LGBMReg':
model = make_pipe(LGBMRegressor(boosting_type='gbdt',random_state=R_STATE, **kwargs))
else: return 0, 'Тип модели не найден'
return model
#Таблица для хранения лучших результатов моделей
columns_model = ['model','g_param','Score']
models_valid = pd.DataFrame(columns=columns_model)
def get_model(params, train_features, train_target):
model = make_clf(params['model'], **params['g_param'])
pbar = tqdm(total = 1)
model.fit(train_features, train_target)
pbar.update(1)
pbar.close()
return model
def get_param_list(kwargs):
params_list = pd.DataFrame()
if len(kwargs) > 0:
new_str = pd.Series()
for key, param in kwargs.items():
old_param_list = params_list.copy()
params_list = pd.DataFrame()
num_params = len(old_param_list)
if num_params < 1 : num_params=1
for str_ind in range(num_params):
try:
for p in param:
try:
new_str = old_param_list.iloc[str_ind]
except: None
new_str[key] = p
params_list = params_list.append(new_str, ignore_index=True)
except:
params_list[key] = param
if len(params_list) < 1: params_list.loc[0,'no_param']='no_param'
return params_list
def get_model_params(model_type, features, target, scorer, title=''
, disp_metr=True
,returnModel=False, debug=False, **kwargs):
"""
Функция для создания модели
"""
columns_model = ['model','g_param','el_time','score']
models_param = pd.DataFrame(columns=columns_model)
print('==========================')
disp_font_size(f"Модель: {MODEL_TEXT_NAME[model_type]} {title}",5)
params_list = get_param_list(kwargs)
pbar = tqdm(total = len(params_list))
params = {}
for ind, row in params_list.iterrows():
for col_n, val in row.items():
if col_n != 'no_param':
if float(val)%1>0: params[col_n] = float(val)
else: params[col_n] = int(val)
model = make_clf(model_type, **params)
try:
score, el_time = score_cv(model, features, target, scorer)
if debug: print(f'{kwargs}|{score}')
models_param = models_param.append({'model': model_type
,'g_param':{**params}
,'el_time':el_time
,'score':score}
, ignore_index=True)
except:
models_param = models_param.append({'model': model_type
,'g_param':{**params}
,'el_time':'Err'
,'score':'Err'}
, ignore_index=True)
pbar.update(1)
pbar.close()
#Вывод результатов
models_param = models_param.loc[~models_param['score'].isin(['Err','-'] )]
models_param = models_param.sort_values(by='score', ascending=False).reset_index(drop=True)
if disp_metr:
display(models_param)
disp_font_size(re.sub(r'[\{\}]', '', str(models_param.loc[0,'g_param'])),5)
disp_font_size(f"Время: {models_param.loc[0,'el_time']}",5)
display_metric(models_param.loc[0,'score'], scorer)
return models_param
def models_test(params, features, target, test_features, test_target, scorer):
models_p = pd.DataFrame(columns=['model','g_param','test_time','test_score'])
tested_model = pd.DataFrame()
for i, row in params.iterrows():
el_time = time.time()
try:
model = get_model(row, features, target)
predict = model.predict(test_features)
score = round(scorer(test_target,predict)) * -1
except:
score = 'Err'
el_time = round(time.time() - el_time,2)
disp_font_size(MODEL_TEXT_NAME[row['model']],5)
disp_font_size(re.sub(r'[\{\}]', '', str(row['g_param'])),4)
display_metric(score, '', title='Тестовая выборка')
tested_model = row[['model','g_param']]
tested_model['test_score'] = score
tested_model['test_time'] = el_time
models_p = models_p.append(tested_model, ignore_index=True)
return models_p.sort_values(by='test_score', ascending=True).reset_index(drop=True)
def models_serch(params, features, target, test_features=None, test_target=None, scorer=[rmse,False]
, test=False, test_num=1):
models_p = pd.DataFrame()
my_scorer = make_scorer(scorer[0], greater_is_better=scorer[1])
for model, params in params.items():
model_name = re.sub(r".*\_", '', str(model))
log = get_model_params(model_name, features, target, my_scorer, disp_metr=True, **params, debug=DEBUGING)
sort_column = 'score'
test_num_str = test_num
log['test_time'] = '-'
log['test_score'] = '-'
if test:
print('======================')
disp_font_size("Результаты на тестовой выборке",5)
if test_num < len(log): test_num_str = test_num
else: test_num_str = len(log)
models_test_p = models_test(log.head(test_num_str+1), features, target, test_features, test_target
, scorer=scorer[0])
display(models_test_p)
log.loc[0:test_num,'test_time'] = models_test_p.loc[0:test_num,'test_time']
log.loc[0:test_num,'test_score'] = models_test_p.loc[0:test_num,'test_score']
sort_column = 'test_score'
models_p = models_p.append(log, ignore_index=True,sort=False)
result = models_p.loc[~models_p[sort_column].isin(['Err','-'] )]
return result.sort_values(by=sort_column, ascending=False).reset_index(drop=True)
def div_features_target(df,targetname):
"""
Функция для разделения таблицы на признаки и целевой признак
"""
return df.drop([targetname], axis=1), df[targetname]
def make_features(data, max_lag=6, rolling_mean_size=10):
# data['year'] = data.index.year
# data['month'] = data.index.month
#data['month_day'] = f"{data.index.month}_{data.index.day}"
data['hour'] = data.index.hour
data['dayofweek'] = data.index.dayofweek
for lag in range(1, max_lag + 1):
data['lag_{}'.format(lag)] = data['num_orders'].shift(lag)
data['rolling_mean'] = data['num_orders'].shift().rolling(rolling_mean_size).mean()
params_find = False
if params_find:
params_s ={'max_lag': range(15,32,1), 'rolling_mean_size': range(24,169,24)}
ft_params_list = get_param_list(params_s)
ft_params_score = pd.DataFrame(columns=['g_param','train_score','test_score'])
model = LinearRegression()
params = {}
pbar = tqdm(total = len(ft_params_list))
for ind, row in ft_params_list.iterrows():
taxi_copy = taxi.copy()
for col_n, val in row.items():
if col_n != 'no_param':
if float(val)%1>0: params[col_n] = float(val)
else: params[col_n] = int(val)
make_features(taxi_copy, **params)
train, test = train_test_split(taxi_copy, shuffle=False, test_size=0.1)
train = train.dropna()
train_features, train_target = div_features_target(train,'num_orders')
test_features, test_target = div_features_target(test, 'num_orders')
model.fit(train_features, train_target)
train_pred = model.predict(train_features)
test_pred = model.predict(test_features)
ft_params_score = ft_params_score.append({'g_param':{**params},
'train_score':rmse(train_target,train_pred),
'test_score':rmse(test_target,test_pred)},
ignore_index=True)
pbar.update(1)
pbar.close
ft_params_score = ft_params_score.sort_values(by='test_score', ascending=True).reset_index(drop=True)
else:
ft_params_score = pd.DataFrame({
'g_param':[{'max_lag': 31, 'rolling_mean_size': 72},
{'max_lag': 2, 'rolling_mean_size': 72},
{'max_lag': 99, 'rolling_mean_size': 24}
],
'train_score':[25, 30, 22],
'test_score':[44, 49, 39]
})
display(ft_params_score)
make_features(taxi, **ft_params_score.loc[0,'g_param'])
train, test = train_test_split(taxi, shuffle=False, test_size=0.1)
train = train.dropna()
train_features, train_target = div_features_target(train,'num_orders')
test_features, test_target = div_features_target(test, 'num_orders')
print("Среднее количество заказов такси в час:", round(test['num_orders'].mean(),4))
print("Все значения тестовой выборки предсказываются одним и тем же числом (константой):")
pred_median = np.ones(test.shape) * train['num_orders'].median()
print("RMSE:", rmse(test,pred_median))
print("Новое значение x(t) прогнозируется предыдущим значением ряда, то есть x(t-1):")
pred_previous = test.shift()
pred_previous.iloc[0] = train['num_orders'].iloc[-1]
print("RMSE:", rmse(test,pred_previous))
models_params_find = False
if models_params_find:
models_param = {
'LinearReg':{},
'DecTreeReg':{ 'max_depth': range(10,12,1)},
'RndForestReg':{'max_depth': range(10,12,1), 'n_estimators': range(39,41,1)},
'LGBMReg':{'max_depth': range(5,6,1),'n_estimators': range(855,858,1),
'learning_rate':[0.01]}
}
model_list = models_serch(models_param, train_features, train_target
, test_features, test_target
, test=True, test_num=1)
else:
model_list = pd.DataFrame({'model' : ['LGBMReg','RndForestReg','LinearReg'],
'g_param' : [{'learning_rate': 0.01, 'max_depth': 5, 'n_estimators': 856},
{'max_depth': 10, 'n_estimators': 39},
{}],
'el_time' : [12.7, 2.4, 0.43],
'score' : [-27, -27, -27],
'test_time' : [8.7, 2.3, 0.22],
'test_score' : [-43, -44, -44]}
)
display(model_list)
def check_model_predict(model_param, train_features, train_target, test_features, test_target):
model_01 = get_model(model_param, train_features, train_target)
day_test_pred = model_01.predict(test_features)
day_test_target_pred = pd.DataFrame()
day_test_target_pred['target'] = test_target
day_test_target_pred['pred'] = day_test_pred
ax = day_test_target_pred['target'].plot(figsize=(20, 5), label='target', title=model_param['model'])
day_test_target_pred['pred'].plot(figsize=(20, 5),ax=ax, label='prediction')
plt.legend()
plt.show()
for i in range(len(model_list)):
check_model_predict(model_list.loc[i], train_features, train_target
, test_features['2018-08-20':'2018-08-23']
, test_target['2018-08-20':'2018-08-23'])
| 0.273089 | 0.931525 |
# Basic trigonometry
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
If two right triangles (triangles with an angle of $90^o$ ($\pi/2$ radians)) have equal acute angles, they are similar, so their side lengths are proportional.
These proportionality constants are the values of $\sin\theta$, $\cos\theta$, and $\tan\theta$.
Here is a geometric representation of the main [trigonometric functions](http://en.wikipedia.org/wiki/Trigonometric_function) of an angle $\theta$:
<br>
<figure><img src='http://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Academ_Base_of_trigonometry.svg/300px-Academ_Base_of_trigonometry.svg.png' alt='Main trigonometric functions'/><figcaption><center><i>Figure. Main trigonometric functions (<a href="http://en.wikipedia.org/wiki/Trigonometric_function">image from Wikipedia</a>).</i></center></figcaption></figure>
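For example (a small added illustration, not in the original text), in a 3-4-5 right triangle the angle opposite the side of length 3 satisfies these ratios:
```
import numpy as np

opposite, adjacent, hypotenuse = 3, 4, 5
theta = np.arctan2(opposite, adjacent)       # about 0.6435 rad (36.87 degrees)
print(np.sin(theta), opposite / hypotenuse)  # both about 0.6
print(np.cos(theta), adjacent / hypotenuse)  # both about 0.8
print(np.tan(theta), opposite / adjacent)    # both about 0.75
```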
## Radian
An arc of a circle with the same length as the radius of that circle corresponds to an angle of 1 radian:
<br>
<figure><img src='http://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Radian_cropped_color.svg/220px-Radian_cropped_color.svg.png' width=200/><figcaption><center><i>Figure. Definition of the radian (<a href="https://en.wikipedia.org/wiki/Radian">image from Wikipedia</a>).</i></center></figcaption></figure>
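As a quick numerical aside (added here, not in the original text), NumPy converts between degrees and radians directly:
```
import numpy as np

print(np.deg2rad(180))  # 3.141592653589793, i.e. pi radians
print(np.rad2deg(1))    # 57.29577951308232, i.e. one radian is about 57.3 degrees
```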
## Common trigonometric values
<table>
<tr>
<th style="text-align: center; background-color:#FBFBEF">$\;\theta\;(^o)$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\theta\;(rad)$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\sin \theta\;$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\cos \theta\;$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\tan \theta\;$</th>
</tr>
<tr>
<td style="text-align: center">$0^o$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$1$</td>
<td style="text-align: center">$0$</td>
</tr>
<tr>
<td style="text-align: center">$30^o$</td>
<td style="text-align: center">$\pi/6$</td>
<td style="text-align: center">$1/2$</td>
<td style="text-align: center">$\sqrt{3}/2$</td>
<td style="text-align: center">$1\sqrt{3}$</td>
</tr>
<tr>
<td style="text-align: center">$45^o$</td>
<td style="text-align: center">$\pi/4$</td>
<td style="text-align: center">$\sqrt{2}/2$</td>
<td style="text-align: center">$\sqrt{2}/2$</td>
<td style="text-align: center">$1$</td>
</tr>
<tr>
<td style="text-align: center">$60^o$</td>
<td style="text-align: center">$\pi/3$</td>
<td style="text-align: center">$\sqrt{3}/2$</td>
<td style="text-align: center">$1/2$</td>
<td style="text-align: center">$\sqrt{3}$</td>
</tr>
<tr>
<td style="text-align: center">$90^o$</td>
<td style="text-align: center">$\pi/2$</td>
<td style="text-align: center">$1$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$\infty$</td>
</tr>
</table>
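The values in the table can be checked numerically; the following short sketch (an addition, not in the original notebook) uses NumPy:
```
import numpy as np

angles = np.deg2rad([0, 30, 45, 60, 90])
print(np.round(np.sin(angles), 4))  # [0.     0.5    0.7071 0.866  1.    ]
print(np.round(np.cos(angles), 4))  # [1.     0.866  0.7071 0.5    0.    ]
print(np.round(np.tan(angles), 4))  # the last entry blows up, since tan diverges at 90 degrees
```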
## Trigonometric identities
Some of the main trigonometric identities are (see a [complete list at Wikipedia](http://en.wikipedia.org/wiki/List_of_trigonometric_identities)):
$$ \sin^2{\alpha} + \cos^2{\alpha} = 1 $$
$$ \sin(2\alpha) = 2\sin{\alpha} \cos{\alpha} $$
$$ \cos(2\alpha) = \cos^2{\alpha} - \sin^2{\alpha} $$
$$ \sin(\alpha \pm \beta) = \sin{\alpha} \cos{\beta} \pm \cos{\alpha} \sin{\beta} $$
$$ \cos(\alpha \pm \beta) = \cos{\alpha} \cos{\beta} \mp \sin{\alpha} \sin{\beta} $$
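These identities can be checked numerically for arbitrary angles; here is a minimal sketch with NumPy (an addition, not part of the original notebook):
```
import numpy as np

a, b = 0.7, 1.3  # arbitrary angles in radians
print(np.isclose(np.sin(a)**2 + np.cos(a)**2, 1))                            # True
print(np.isclose(np.sin(2*a), 2*np.sin(a)*np.cos(a)))                        # True
print(np.isclose(np.sin(a + b), np.sin(a)*np.cos(b) + np.cos(a)*np.sin(b)))  # True
print(np.isclose(np.cos(a + b), np.cos(a)*np.cos(b) - np.sin(a)*np.sin(b)))  # True
```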
## References
- [Trigonometric functions [Wikipedia]](http://en.wikipedia.org/wiki/Trigonometric_function).
- [Trigonometry [S.O.S. Mathematics]](http://www.sosmath.com/trig/trig.html).
|
github_jupyter
|
# Basic trigonometry
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
If two right triangles (triangles with an angle of $90^o$ ($\pi/2$ radians)) have equal acute angles, they are similar, so their side lengths are proportional.
These proportionality constants are the values of $\sin\theta$, $\cos\theta$, and $\tan\theta$.
Here is a geometric representation of the main [trigonometric functions](http://en.wikipedia.org/wiki/Trigonometric_function) of an angle $\theta$:
<br>
<figure><img src='http://upload.wikimedia.org/wikipedia/commons/thumb/1/11/Academ_Base_of_trigonometry.svg/300px-Academ_Base_of_trigonometry.svg.png' alt='Main trigonometric functions'/><figcaption><center><i>Figure. Main trigonometric functions (<a href="http://en.wikipedia.org/wiki/Trigonometric_function">image from Wikipedia</a>).</i></center></figcaption></figure>
## Radian
An arc of a circle with the same length as the radius of that circle corresponds to an angle of 1 radian:
<br>
<figure><img src='http://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/Radian_cropped_color.svg/220px-Radian_cropped_color.svg.png' width=200/><figcaption><center><i>Figure. Definition of the radian (<a href="https://en.wikipedia.org/wiki/Radian">image from Wikipedia</a>).</i></center></figcaption></figure>
## Common trigonometric values
<table>
<tr>
<th style="text-align: center; background-color:#FBFBEF">$\;\theta\;(^o)$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\theta\;(rad)$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\sin \theta\;$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\cos \theta\;$</th>
<th style="text-align: center; background-color:#FBFBEF">$\;\tan \theta\;$</th>
</tr>
<tr>
<td style="text-align: center">$0^o$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$1$</td>
<td style="text-align: center">$0$</td>
</tr>
<tr>
<td style="text-align: center">$30^o$</td>
<td style="text-align: center">$\pi/6$</td>
<td style="text-align: center">$1/2$</td>
<td style="text-align: center">$\sqrt{3}/2$</td>
<td style="text-align: center">$1\sqrt{3}$</td>
</tr>
<tr>
<td style="text-align: center">$45^o$</td>
<td style="text-align: center">$\pi/4$</td>
<td style="text-align: center">$\sqrt{2}/2$</td>
<td style="text-align: center">$\sqrt{2}/2$</td>
<td style="text-align: center">$1$</td>
</tr>
<tr>
<td style="text-align: center">$60^o$</td>
<td style="text-align: center">$\pi/3$</td>
<td style="text-align: center">$\sqrt{3}/2$</td>
<td style="text-align: center">$1/2$</td>
<td style="text-align: center">$\sqrt{3}$</td>
</tr>
<tr>
<td style="text-align: center">$90^o$</td>
<td style="text-align: center">$\pi/2$</td>
<td style="text-align: center">$1$</td>
<td style="text-align: center">$0$</td>
<td style="text-align: center">$\infty$</td>
</tr>
</table>
## Trigonometric identities
Some of the main trigonometric identities are (see a [complete list at Wikipedia](http://en.wikipedia.org/wiki/List_of_trigonometric_identities)):
$$ \sin^2{\alpha} + \cos^2{\alpha} = 1 $$
$$ \sin(2\alpha) = 2\sin{\alpha} \cos{\alpha} $$
$$ \cos(2\alpha) = \cos^2{\alpha} - \sin^2{\alpha} $$
$$ \sin(\alpha \pm \beta) = \sin{\alpha} \cos{\beta} \pm \cos{\alpha} \sin{\beta} $$
$$ \cos(\alpha \pm \beta) = \cos{\alpha} \cos{\beta} \mp \sin{\alpha} \cos{\beta} $$
## References
- [Trigonometric functions [Wikipedia]](http://en.wikipedia.org/wiki/Trigonometric_function).
- [Trigonometry [S.O.S. Mathematics]](http://www.sosmath.com/trig/trig.html).
| 0.892846 | 0.983279 |
```
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
import GPy
import emupints
import emupints.plot as emuplt
import emupints.utils as emutils
import copy
import operator
%matplotlib inline
# Create a model
model = pints.toy.FitzhughNagumoModel()
# Run a simulation
real_parameters = [0.1, 0.5, 3]
times = np.linspace(0, 20, 200)
org_values = model.simulate(real_parameters, times)
# take 5-10% of range as your std for noise
Vs, Rs = org_values.reshape(2, 200)
V_std = (Vs.max() - Vs.min()) * .1
R_std = (Rs.max() - Rs.min()) * .1
# Add noise
noise = [V_std, R_std]
values = org_values + np.random.normal(0, noise, org_values.shape)
# Create an object with links to the model and time series
problem = pints.MultiOutputProblem(model, times, values)
# Create a log-likelihood function (adds an extra parameter!)
real_log_likelihood = pints.KnownNoiseLogLikelihood(problem, noise)
bounds = pints.Boundaries(lower = [0, 0, 2], upper = [1, 1, 4])
log_prior = pints.UniformLogPrior(bounds)
input_parameters = log_prior.sample(500)
likelihoods = np.apply_along_axis(real_log_likelihood, 1, input_parameters)
likelihoods[:5]
emu = emupints.GPEmulator(real_log_likelihood, input_parameters,
likelihoods,
normalize_input = True,
)
```
## Gradually trying bigger and bigger kernels
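The search below relies on the fact that GPy kernel objects can be combined with `+` and `*`. As a tiny added illustration (not from the original notebook):
```
# GPy kernels compose with + and *, producing Add and Prod kernel objects.
import GPy

k = GPy.kern.RBF(3) + GPy.kern.Linear(3) * GPy.kern.RatQuad(3)
print(k)  # an Add kernel combining an RBF term with a Linear * RatQuad product
```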
```
def is_prod_kernel(kernel):
return type(kernel) == GPy.kern.src.prod.Prod
def is_add_kernel(kernel):
return type(kernel) == GPy.kern.src.add.Add
def kernel_to_string(kernel, ident = 0):
if kernel is None:
return ""
s = ""
tab = ident * " "
if is_prod_kernel(kernel) or is_add_kernel(kernel):
op = "*" if is_prod_kernel(kernel) else "+"
sub_kernels = []
for sub_kernel in kernel.parameters:
sub_kernels.append(kernel_to_string(sub_kernel, ident = ident + 1))
s = "(" + op + "\n" + "\n".join(sub_kernels) + "\n" + tab + ")"
else:
# get name of kernel without "'>" characters
name = str(type(kernel)).split(".")[-1]
name = name[:-2]
values = ",".join(["{:5f}".format(x) for x in kernel])
s = name + "(" + values + ")"
return " " * ident + s
def get_total_variance(kernel):
ans = 0
if is_prod_kernel(kernel) or is_add_kernel(kernel):
for sub_kernel in kernel.parameters:
ans += get_total_variance(sub_kernel)
else:
try:
if hasattr(kernel, "variance"):
ans += kernel.variance
elif hasattr(kernel, "variances"):
ans += kernel.variances
except:
# some kernels don't have variance as parameter
ans += 0
return ans
# function to get a measure of performance of kernel
# marginal_log_likelihood - alpha * var
def get_score(emu, alpha = 5):
marg_log_likelihood = emu.get_log_marginal_likelihood()
kern = emu.get_trained_kern()
var = get_total_variance(kern)
return marg_log_likelihood - alpha * var
n_parameters = emu.n_parameters()
base_kerns = [GPy.kern.RBF(n_parameters),
GPy.kern.RatQuad(n_parameters),
GPy.kern.Linear(n_parameters),
GPy.kern.PeriodicExponential(1),
GPy.kern.PeriodicMatern52(1),
]
max_depth = 10
prev_kern = None
#time_limit?
#objective
optimizer = 'lbfgs'
#perform initial kernel selection
max_score = -1000
max_kern = None
for kern in base_kerns:
emu.set_parameters(kernel = kern)
emu.fit(normalizer = True, messages = False)
score = get_score(emu)
if score > max_score:
        print('New best kernel found')
max_score = score
max_kern = emu.get_trained_kern()
print(emu.get_trained_kern())
print('Marginal log likelihood - alpha * var: ', score)
emu.set_parameters(kernel = max_kern)
emu.fit(normalizer = True)
emu.get_gp()
depth = 1
prev_max_kern = max_kern
while depth < max_depth:
print("-" * 10 + "Depth " + str(depth) + '-'*10)
max_score = -1000
for op in [operator.add, operator.mul]:
for kern in base_kerns:
current_kern = op(prev_max_kern, kern)
emu.set_parameters(kernel = current_kern)
emu.fit(normalizer = True, messages = False)
score = get_score(emu)
if score > max_score:
max_kern = emu.get_trained_kern()
max_score = score
prev_max_kern = max_kern
print(max_kern)
print(max_score)
depth += 1
get_total_variance(prev_max_kern)
```
# Multivariate Gaussian Random Walk
```
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import theano
from scipy.linalg import cholesky
np.random.seed(42)
%matplotlib inline
```
Simulate the data:
```
D = 3
N = 300
sections = 5
period = N/sections
Sigma_a = np.random.randn(D, D)
Sigma_a = Sigma_a.T.dot(Sigma_a)
L_a = cholesky(Sigma_a, lower=True)
Sigma_b = np.random.randn(D, D)
Sigma_b = Sigma_b.T.dot(Sigma_b)
L_b = cholesky(Sigma_b, lower=True)
# Gaussian Random walk:
alpha = np.cumsum(L_a.dot(np.random.randn(D, sections)), axis=1).T
beta = np.cumsum(L_b.dot(np.random.randn(D, sections)), axis=1).T
sigma = 0.1
t = np.arange(N)[:, None]/ N
alpha = np.repeat(alpha, period, axis=0)
beta = np.repeat(beta, period, axis=0)
y = alpha + beta*t + sigma*np.random.randn(N, 1)
plt.figure(figsize=(12, 5))
plt.plot(t, y)
plt.title('Three Correlated Series')
plt.show()
class Scaler():
def __init__(self):
        self.mean_ = None
        self.std_ = None
def transform(self, x):
return (x - self.mean_) / self.std_
def fit_transform(self, x):
self.mean_ = x.mean(axis=0)
self.std_ = x.std(axis=0)
return self.transform(x)
def inverse_transform(self, x):
return x*self.std_ + self.mean_
def inference(t, y, sections, n_samples=100):
N, D = y.shape
    # Standardize y and t
y_scaler = Scaler()
t_scaler = Scaler()
y = y_scaler.fit_transform(y)
t = t_scaler.fit_transform(t)
# Create a section index
t_section = np.repeat(np.arange(sections), N/sections)
# Create theano equivalent
t_t = theano.shared(np.repeat(t, D, axis=1))
y_t = theano.shared(y)
t_section_t = theano.shared(t_section)
with pm.Model() as model:
packed_L_α = pm.LKJCholeskyCov('packed_L_α', n=D,
eta=2., sd_dist=pm.HalfCauchy.dist(2.5))
L_α = pm.expand_packed_triangular(D, packed_L_α)
packed_L_β = pm.LKJCholeskyCov('packed_L_β', n=D,
eta=2., sd_dist=pm.HalfCauchy.dist(2.5))
L_β = pm.expand_packed_triangular(D, packed_L_β)
α = pm.MvGaussianRandomWalk('alpha', shape=(sections, D), chol=L_α)
β = pm.MvGaussianRandomWalk('beta', shape=(sections, D), chol=L_β)
alpha_r = α[t_section_t]
beta_r = β[t_section_t]
regression = alpha_r+beta_r*t_t
sd = pm.Uniform('sd', 0, 1)
likelihood = pm.Normal('y', mu=regression, sigma=sd, observed=y_t)
trace = pm.sample(n_samples, cores=4)
return trace, y_scaler, t_scaler, t_section
trace, y_scaler, t_scaler, t_section = inference(t, y, sections)
```
Predict the mean expected y value.
```
a_mean = trace['alpha'][-1000:].mean(axis=0)
b_mean = trace['beta'][-1000:].mean(axis=0)
y_pred = y_scaler.inverse_transform(a_mean[t_section] + b_mean[t_section]*t_scaler.transform(t))
plt.figure(figsize=(12, 5))
plt.gca().set_prop_cycle('color', ['red', 'green', 'blue'])
plt.plot(t, y, '.')
plt.plot(t, y_pred)
plt.title('Mean Prediction of Three Correlated Series')
plt.show()
%load_ext watermark
%watermark -n -u -v -iv -w
```
```
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib
matplotlib.rcParams["image.interpolation"] = None
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from glob import glob
from tqdm import tqdm
from tifffile import imread
from csbdeep.utils import Path, download_and_extract_zip_file
from stardist import relabel_image_stardist3D, Rays_GoldenSpiral, calculate_extents
from stardist import fill_label_holes, random_label_cmap
from stardist.matching import matching_dataset
np.random.seed(42)
lbl_cmap = random_label_cmap()
```
# Data
This notebook demonstrates what the training data for *StarDist* should look like and whether the annotated objects can be appropriately described by star-convex polyhedra.
<div class="alert alert-block alert-info">
The training data that needs to be provided for StarDist consists of corresponding pairs of raw images and pixelwise annotated ground truth images (masks), where every pixel has a unique integer value indicating the object id (or 0 for background).
</div>
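For intuition, a toy label mask could look like the following (a hypothetical 2D example for illustration only; the masks loaded below are 3D ZYX stacks):
```
import numpy as np

# Hypothetical 6x6 ground-truth mask: 0 = background,
# each positive integer is the id of one annotated object.
toy_mask = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 2],
    [0, 0, 0, 0, 2, 2],
    [3, 3, 0, 0, 2, 2],
    [3, 3, 0, 0, 0, 0],
])
print('object ids:', np.unique(toy_mask[toy_mask > 0]))  # -> [1 2 3]
```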
Load the raw images as `X` and the corresponding ground truth labels as `Y` in the cell below.
```
X = sorted(glob('../../data/raw/*.tif'))
Y = sorted(glob('../../data/GT/*.tif'))
assert all(Path(x).name==Path(y).name for x,y in zip(X,Y))
X = list(map(imread,X))
Y = list(map(imread,Y))
extents = calculate_extents(Y)
anisotropy = tuple(np.max(extents) / extents)
print('empirical anisotropy of labeled objects = %s' % str(anisotropy))
```
# Example image
```
i = 0
img, lbl = X[i], fill_label_holes(Y[i])
assert img.ndim in (3,4)
# assumed axes ordering of img and lbl is: ZYX(C)
plt.figure(figsize=(16,10))
z = img.shape[0] // 2
y = img.shape[1] // 2
plt.subplot(121); plt.imshow(img[z],cmap='gray'); plt.axis('off'); plt.title('Raw image (XY slice)')
plt.subplot(122); plt.imshow(lbl[z],cmap=lbl_cmap); plt.axis('off'); plt.title('GT labels (XY slice)')
plt.figure(figsize=(16,10))
plt.subplot(121); plt.imshow(img[:,y],cmap='gray'); plt.axis('off'); plt.title('Raw image (XZ slice)')
plt.subplot(122); plt.imshow(lbl[:,y],cmap=lbl_cmap); plt.axis('off'); plt.title('GT labels (XZ slice)')
None;
```
# Fitting ground-truth labels with star-convex polyhedra
```
def reconstruction_scores(n_rays, anisotropy):
scores = []
for r in tqdm(n_rays):
rays = Rays_GoldenSpiral(r, anisotropy=anisotropy)
Y_reconstructed = [relabel_image_stardist3D(lbl, rays) for lbl in Y]
mean_iou = matching_dataset(Y, Y_reconstructed, thresh=0, show_progress=False).mean_true_score
scores.append(mean_iou)
return scores
n_rays = [8, 16, 32, 64, 96, 128]
scores_iso = reconstruction_scores(n_rays, anisotropy=None)
scores_aniso = reconstruction_scores(n_rays, anisotropy=anisotropy)
plt.figure(figsize=(8,5))
plt.plot(n_rays, scores_iso, 'o-', label='Isotropic')
plt.plot(n_rays, scores_aniso, 'o-', label='Anisotropic')
plt.xlabel('Number of rays for star-convex polyhedra')
plt.ylabel('Reconstruction score (mean intersection over union)')
plt.legend()
None;
```
# Example image reconstructed with various number of rays
## Without taking anisotropy into account
```
fig, ax = plt.subplots(2,3, figsize=(16,11))
for a,r in zip(ax.flat,n_rays):
z = lbl.shape[0] // 2
rays = Rays_GoldenSpiral(r, anisotropy=None)
a.imshow(relabel_image_stardist3D(lbl, rays)[z], cmap=lbl_cmap)
a.set_title('Reconstructed (XY slice, %d rays)' % r)
a.axis('off')
plt.tight_layout();
```
## Taking anisotropy into account
```
fig, ax = plt.subplots(2,3, figsize=(16,11))
for a,r in zip(ax.flat,n_rays):
z = lbl.shape[0] // 2
rays = Rays_GoldenSpiral(r, anisotropy=anisotropy)
a.imshow(relabel_image_stardist3D(lbl, rays)[z], cmap=lbl_cmap)
a.set_title('Reconstructed (XY slice, %d rays)' % r)
a.axis('off')
plt.tight_layout();
```
# MDL DB outliers
Check runs with low scores...
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import pandas as pd
pd.set_option("display.max_columns", None)
import numpy as np
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['lines.linewidth'] = 1.5
import os
import copy
import data
from mdldb.mdl_db import MDLDataBase
from rolldecay import database
from mdldb.tables import Run
from rolldecayestimators.transformers import CutTransformer, LowpassFilterDerivatorTransformer, ScaleFactorTransformer, OffsetTransformer
from rolldecayestimators.analytical_linear_estimator import AnalyticalLinearEstimator
from rolldecayestimators.direct_linear_estimator import DirectLinearEstimator
from rolldecayestimators.direct_estimator_cubic import DirectEstimatorCubic
from rolldecayestimators.direct_estimator import DirectEstimator
from rolldecayestimators.norwegian_estimator import NorwegianEstimator
from mdldb import mdl_to_evaluation
from evaluation.run_dynamic import RunDynamic
from evaluation.run_manoeuvring import RunZigZag
from sklearn.pipeline import Pipeline
import signal_lab
df_rolldecay = database.load(rolldecay_table_name='rolldecay_direct',only_latest_runs=True, limit_score=0.0)
db = database.get_db()
df_rolldecay.head()
df_rolldecay['score'].hist(bins=30)
sql = """
SELECT * from
std
INNER JOIN run
ON std.run_id == run.id
INNER JOIN projects
ON run.project_number==projects.project_number
INNER JOIN loading_conditions
ON (run.loading_condition_id == loading_conditions.id)
INNER JOIN models
ON run.model_number == models.model_number
INNER JOIN ships
ON models.ship_name == ships.name
"""
df_std = pd.read_sql_query(sql=sql, con=db.engine,index_col='run_id')
df_std=pd.merge(left=df_rolldecay, right=df_std, how='left', left_index=True, right_index=True, suffixes=('','_std') )
df_std.plot(x='score',y='psi', style='.', alpha=0.5)
mask = df_rolldecay['score'] < 0.90
df_rolldecay=df_rolldecay.loc[mask].copy()
df_rolldecay.sort_values(by='score', inplace=True)
df_rolldecay.describe()
df_rolldecay.head()
row = df_rolldecay.iloc[2]
run_id = int(row.name)
db_run = db.session.query(Run).get(run_id)
assert not (db_run is None)
run_id
ascii_file = db_run.load()
df_raw = ascii_file.channels
df = signal_lab.mdl_to_evaluation.do_transforms(df=df_raw)
df.rename(columns={'MA/Roll':'phi'}, inplace=True)
row['score']
fig,ax=plt.subplots()
df.plot(y='phi',ax=ax)
ax.grid(True)
df.plot(y='Carriage/Psip')
lowpass_filter = LowpassFilterDerivatorTransformer(cutoff=1, minimum_score=0)
scaler = ScaleFactorTransformer(scale_factor=db_run.model.scale_factor) # dummy value None for now
cutter = CutTransformer(phi_max=np.deg2rad(9), phi_min=np.deg2rad(1))
offset_transformer = OffsetTransformer()
steps = [('filter',lowpass_filter),
('offset',offset_transformer),
('scaler',scaler),
('cutter', cutter),
]
preprocess = Pipeline(steps)
X = preprocess.fit_transform(df)
fig,ax=plt.subplots()
X.plot(y='phi', ax=ax)
ax.grid(True)
fig,ax=plt.subplots()
X.plot(y='phi', ax=ax)
ax.grid(True)
ax.set_xlim(0,200)
X.plot(y='phi1d')
estimators = []
#estimators.append(DirectLinearEstimator(omega_regression=True))
#estimators.append(AnalyticalLinearEstimator(omega_regression=True))
estimators.append(DirectEstimator(omega_regression=True, fit_method='derivation'))
#estimators.append(NorwegianEstimator())
#estimators.append(DirectEstimatorCubic(omega_regression=True))
#estimators.append(DirectLinearEstimator(omega_regression=False))
#estimators.append(AnalyticalLinearEstimator(omega_regression=False))
estimators.append(DirectEstimator(omega_regression=False, fit_method='derivation'))
#estimators.append(NorwegianEstimator())
#estimators.append(DirectEstimatorCubic(omega_regression=False))
#estimators.append(DirectLinearEstimator(omega_regression=False))
#estimators.append(AnalyticalLinearEstimator(omega_regression=False))
estimators.append(DirectEstimator(omega_regression=True, fit_method='integration'))
#estimators.append(NorwegianEstimator())
#estimators.append(DirectEstimatorCubic(omega_regression=False))
#estimators.append(DirectLinearEstimator(omega_regression=False))
#estimators.append(AnalyticalLinearEstimator(omega_regression=False))
estimators.append(DirectEstimator(omega_regression=False, fit_method='integration'))
#estimators.append(NorwegianEstimator())
#estimators.append(DirectEstimatorCubic(omega_regression=False))
for estimator in estimators:
estimator.fit(X)
fig,ax=plt.subplots()
fig.set_size_inches(14,10)
estimator.plot_fit(ax=ax)
ax.grid(True)
score = estimator.score()
title = ''
if estimator.omega_regression:
title+='Omega regression '
else:
title+='Omega fft '
title+='%s ' % estimator.fit_method
title+='Score:%0.2f' % score
ax.set_title(title)
```
## CMA Diagram
Clemmow-Mullaly-Allis (CMA) Diagram
**Warning**: This notebook will store data (png images) under your jupyter working directory, specifically `/the-path-to-your-jupyter-working-directory/sinupy_data/dispersion/*.png`. You can of course change this location (`data_path`) in the following block.
```
from sympy import sqrt, pi, init_printing; init_printing()
from scipy.constants import e, m_p, m_e, c
import sinupy.mediums.plasma as pms
import matplotlib.pyplot as plt
from pathlib import Path
data_path = Path('./sinupy_data/dispersion'); data_path.mkdir(parents=True, exist_ok=True)
from sinupy.draw import draw_discontinuable_expr, add_line_with_slope
import sinupy.algebra.utility as fualguti
from sinupy import mediums, waves
from sinupy.waves import EM
plasma = mediums.ColdMagnetizedPlasma(species='e+i')
wave_eq = waves.EM.WaveEq(plasma)
wave = wave_eq.wave
m_i_N = m_p
m_e_N = m_e
omega_ce = pms.omega_ce(plasma=plasma)
omega_pe = pms.omega_pe(plasma=plasma)
# Even if your plasma.species is 'e', the ion-relevant symbols would not interrupt ...
# our calculation procedure, because `expr.subs(a_specific_symbol, a_numeric_value)` ...
# also would not interrupt our procedure (i.e. throw an exception) when it finds there ...
# does not exist such `a_specific_symbol` in the formula.
omega_ci = pms.omega_cj(plasma=plasma, varidx='i')
omega_pi = pms.omega_pj(plasma=plasma, varidx='i')
# Substitute symbol parameters with accurate numerical values.
# Note the function will capture the variables B, n_0, m_i from the working scope.
w2N = lambda expr: expr\
.subs(omega_ce, pms.omega_ce(B=B))\
.subs(omega_pe, pms.omega_pe(n_0=n_0))\
.subs(omega_ci, pms.omega_cj(q_e=1, m=m_i_N, B=B))\
.subs(omega_pi, pms.omega_pj(n_0=n_0, q_e=1, m=m_i_N))
```
### $N^2(\omega, \theta=0)$ and $\omega$ Singularities
Express $N^2$ in terms of $\omega$, $\omega_{ce}$, $\omega_{pe}$, rather than $\kappa_\perp$, $\kappa_\times$, $\kappa_\parallel$.
There exist $\omega$ singularities: at these points, $\omega$ causes an infinite $N^2$, *i.e.* induces resonance.
The number of numerical results may be smaller than the number of analytic symbolic results, because sympy knows $\omega \geq 0$ and removes some obviously wrong answers.
```
# Substitute kappa components with omega.
N2_in_omega = [
pms.kappa2omega(sol, wave, plasma) for sol in
EM.solve_N2(wave_eq, theta=0)] # <-- Set theta here
# Symbol results of omega singularities
[fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
# Substitute constant parameters with accurate numerical values.
B, n_0 = 5, 1e20
N2_in_omega = [w2N(sol) for sol in N2_in_omega]
# Numerical result of omega singularities
omega_singularites = \
[fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all" # display all expressions in a cell instead of only the last one
from sympy import symbols, solve, Eq, sqrt, pi
from sinupy.algebra.draw import draw_discontinuable_expr, add_line_with_slope
e, epsilon = symbols('e, epsilon', positive=True)
m_e, m_i = plasma.m['e'], plasma.m['i']
n_e, n_i = plasma.n['e'], plasma.n['i']
B = plasma.B_amp()
w2Bn = lambda expr: expr\
.subs(omega_ce, e * B / m_e)\
.subs(omega_ci, e * B / m_i)\
.subs(omega_pe**2, e**2 * n_e /(epsilon * m_e))\
.subs(omega_pi**2, e**2 * n_e /(epsilon * m_i))
```
### Wave Resonance
Wave resonance happens when the relative refraction $N$ blows up to infinity. As we already know, $N^2$ is a function of the wave angular frequency $\omega$, the angle $\angle(\vec{k}, \vec{B})$ between the wave vector $\vec{k}$ and the external magnetic field $\vec{B}$, and the other characteristic frequencies of the plasma, *i.e.* $\omega_{pe}$, $\omega_{ce}$ and so on. In the following blocks, we fix the angle and find the $\omega^2$ that would make $N$ blow up.
```
N2_in_omega = [
pms.kappa2omega(sol, wave, plasma) for sol in
EM.solve_N2(wave_eq, theta=pi/2)] # <-- Set theta here
N2_in_omega
resonance_omega_points = [fualguti.find_singularities(sol, wave.w) for sol in N2_in_omega]
resonance_omega_square_points = [
list(set(map(lambda x:pow(x,2), branch_omega_points)))
for branch_omega_points in resonance_omega_points]
resonance_omega_square_points
# The above expressions contain $\omega_{pe}$, $\omega_{ce}$ and so on.
# We transform them to basic plasma parameters like $\vec{B}$, $n_e$ as follows.
resonance_omega_square_points = [[
w2Bn(omega_square) for omega_square in branch
] for branch in resonance_omega_square_points]
resonance_omega_square_points
cutoff_omega_square_points = [
w2Bn(omega_square) for omega_square in
[((omega_ci-omega_ce + sqrt((omega_ce+omega_ci)**2 + 4 * omega_pe**2))/2)**2,
((omega_ci-omega_ce - sqrt((omega_ce+omega_ci)**2 + 4 * omega_pe**2))/2)**2]
# [((pms.omega_ce + sqrt(pms.omega_ce**2 + 4 * pms.omega_pe**2))/2)**2,
# ((-pms.omega_ce + sqrt(pms.omega_ce**2 + 4 * pms.omega_pe**2))/2)**2, ]
]
cutoff_omega_square_points
X, Y = symbols('X, Y', real=True, negative=False)
Bn2XY = lambda expr: expr\
.subs(B**2, Y * (m_e * m_i * wave.w**2) /(e**2))\
.subs(B, sqrt(Y * (m_e * m_i)) * wave.w / e)\
.subs(n_e, X * (epsilon * m_e * wave.w**2) / e**2)
resonance_omega_square_points = [[
Bn2XY(omega_square) for omega_square in branch
] for branch in resonance_omega_square_points]
resonance_omega_square_points
cutoff_omega_square_points = [
Bn2XY(omega_square.expand()) for omega_square in cutoff_omega_square_points
]
cutoff_omega_square_points
resonance_points_as_Eq_1 = [
[omega_square.subs(wave.w, 1) for omega_square in branch]
for branch in resonance_omega_square_points]
resonance_points_as_Eq_1
cutoff_points_as_Eq_1 = [
omega_square.subs(wave.w, 1)
for omega_square in cutoff_omega_square_points]
cutoff_points_as_Eq_1
from sympy import solve, Eq
solve(Eq(resonance_points_as_Eq_1[1][0], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][1], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][2], 1), Y)
solve(Eq(resonance_points_as_Eq_1[1][3], 1), Y)
solve(Eq(cutoff_points_as_Eq_1[0], 1), X)
solve(Eq(cutoff_points_as_Eq_1[1], 1), X)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "last" # revert to displaying only the last expression in each cell
import matplotlib.pyplot as plt
fig_CMA, ax_CMA = plt.subplots(figsize=(20, 30))
ax_CMA.set_xscale('log')
ax_CMA.set_yscale('log')
ax_CMA.set_xlabel('$\omega_p^2/\omega^2$', loc='right', fontdict={'size': 32})
ax_CMA.set_ylabel('$\\frac{\omega_{ce}\omega_{ci}}{\omega^2}$ ', loc='top', fontdict={'size': 32}, rotation=0)
# ax_CMA.set_xticks([1.0])
# ax_CMA.set_xticklabels(size=20)
ax_CMA.set_yticks([1./m_i_N, 1.0, m_i_N/1])
ax_CMA.set_yticklabels(['$m_e/m_i$', '$1.0$', '$m_i/m_e$'], size=20)
# change the fontsize
ax_CMA.tick_params(axis='x', labelsize=20)
ax_CMA.axhline(
y=solve(Eq(resonance_points_as_Eq_1[1][0], 1), Y)[0].subs(m_i, m_i_N).subs(m_e, m_e_N),
color='blue', linestyle=':', label='$u_L=0$, $\omega=\omega_{ce}$')
ax_CMA.axhline(
y=solve(Eq(resonance_points_as_Eq_1[1][2], 1), Y)[0].subs(m_i, m_i_N).subs(m_e, m_e_N),
color='purple', linestyle=':', label='$u_R=0$, $\omega=\omega_{ci}$')
ax_CMA.axvline(
x=1,
color='darkcyan', linestyle=':', label='$u_O=\infty$, $\omega=\omega_{pe}$')
draw_discontinuable_expr(
[sol.subs(m_i, m_i_N).subs(m_e, m_e_N)
for sol in solve(Eq(resonance_points_as_Eq_1[1][1], 1), Y)], X, # [1][3] is also okay
varlim = (1e-3, 1e7), exprlim=(1e-5, None), num=500,
var_sample_scale='log', fig=fig_CMA, ax=ax_CMA, labels=['$u_X=0$, $\omega=\omega_{UH}$', '$u_X=0$, $\omega=\omega_{LH}$']
)
draw_discontinuable_expr(
[sol.subs(m_i, m_i_N).subs(m_e, m_e_N)
for sol in solve(Eq(cutoff_points_as_Eq_1[0], 1), Y)], X,
varlim = (1e-3, 1e7), exprlim=(1e-5, None), num=500,
var_sample_scale='log', fig=fig_CMA, ax=ax_CMA, labels=['$u_R=\infty$, $\omega=\omega_{R}$', '$u_L=\infty$, $\omega=\omega_{L}$']
)
ax_CMA.legend(prop={'size': 20})
plt.close(fig_CMA)
from matplotlib.patches import Circle
from matplotlib.offsetbox import (TextArea, DrawingArea, OffsetImage,
AnnotationBbox)
from matplotlib.cbook import get_sample_data
for i, (B, n_0, omega) in enumerate(plasma_B_n_0_omega):
with get_sample_data((data_path / f"v_ph_{i}.png").absolute()) as file:
arr_img = plt.imread(file, format='png')
imagebox = OffsetImage(arr_img, zoom=0.28)
imagebox.image.axes = ax_CMA
imagebox
x_CMA = w2N((omega_pe**2 + omega_pi**2) / omega**2)
y_CMA = w2N(omega_ce * omega_ci / omega**2)
xy_CMA = (x_CMA, y_CMA)
print(xy_CMA)
ab = AnnotationBbox(imagebox, xy_CMA,
xybox=(150., -200.),
xycoords='data',
boxcoords="offset points",
pad=0.5,
arrowprops=dict(
arrowstyle="->",
connectionstyle="angle,angleA=0,angleB=90,rad=3")
)
ax_CMA.add_artist(ab)
print(f"The {i}-th phase speed polar plot.")
fig_CMA
```
### References:
- For a better impression of the available colors, the [matplotlib official color gallery](https://matplotlib.org/3.1.0/gallery/color/named_colors.html) can be consulted.
# Example - Categorical Data
```
import geopandas as gpd
import pandas
from geocube.api.core import make_geocube
%matplotlib inline
```
## Load in soil data
```
ssurgo_data = gpd.read_file("../../test/test_data/input/soil_data_group.geojson")
# original data
ssurgo_data[ssurgo_data.hzdept_r==15].plot(column='sandtotal_r')
```
## Generate categories for categorical data
If your data is only a subset of all of the data, the list of categories you get will likely not be complete.
NOTE: The categories will be made unique and sorted internally if they are not already.
```
# this is only a subset of all of the classes
ssurgo_data.drclassdcd.drop_duplicates().values.tolist()
# complete list of categories
drclasses_complete = [
'Poorly drained',
'Somewhat poorly drained',
'Excessively drained',
'Subaqueous',
'Well drained',
'Somewhat excessively drained',
'Very poorly drained',
'Moderately well drained'
]
categorical_enums = {'drclassdcd': drclasses_complete}
```
## Convert data to grid
See docs for [make_geocube](../geocube.rst#make-geocube)
```
out_grid = make_geocube(
vector_data=ssurgo_data,
output_crs="epsg:32615",
group_by='hzdept_r',
resolution=(-100, 100),
categorical_enums=categorical_enums
)
out_grid
# mask nodata and plot
clay_slice = out_grid.claytotal_r.sel(hzdept_r=15)
clay_slice.where(clay_slice!=out_grid.claytotal_r.rio.nodata).plot()
```
## Dealing with categorical data
Because the data needs to be numerical for the vector-to-raster conversion, the grid stores the categories as numbers. To convert back to strings, use the list of categories provided alongside the output.
```
drclassdcd_slice = out_grid.drclassdcd.sel(hzdept_r=15)
drclassdcd_slice.where(drclassdcd_slice!=out_grid.drclassdcd.rio.nodata).plot()
drclassdcd_string = out_grid['drclassdcd_categories'][out_grid['drclassdcd'].astype(int)]\
.drop('drclassdcd_categories')
out_grid['drclassdcd'] = drclassdcd_string
out_grid
pdf = out_grid.drop(['spatial_ref', 'drclassdcd_categories']).to_dataframe()
pdf.head()
```
### Make sure all categories are represented
To do this, convert the column type to categorical beforehand and make sure that
you include all of the possible categories.
```
cat_dtype = pandas.api.types.CategoricalDtype(out_grid.drclassdcd_categories.values)
pdf['drclassdcd'] = pdf['drclassdcd'].astype(cat_dtype)
training_df = pandas.get_dummies(pdf, columns=['drclassdcd'])
training_df.head()
training_df.columns
```
# K-means Clustering Algorithm
Before implementation, let's understand the type of problem we will solve here. We have a wine dataset to segment, i.e. an explainable clustering analysis.
Contents of this kernel:
- Exploratory Data Analysis with some Data Visualization
- Data Preprocessing
- K-Means Clustering with K selection techniques
- Visualization of Clusters using PCA
```
# importing libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
%matplotlib inline
# Importing the dataset
data = pd.read_csv('wine-clustering.csv')
```
## Exploratory Data Analysis (EDA)
Checking the data head, we can see that all the data is numerical: there are no categorical values.
```
data.head()
data.info()
data.describe()
```
### Checking the skewness of our dataset.
Normally distributed data has a skewness close to zero.\
Skewness greater than zero means that there is more weight in the left side of the data (a longer right tail).\
On the other hand, skewness smaller than zero means that there is more weight in the right side of the data (a longer left tail).
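As a small illustration (a sketch using synthetic data, not the wine dataset), a sample with a long right tail has positive skewness, while a symmetric sample stays close to zero:
```
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
symmetric = rng.normal(size=10_000)          # skewness close to 0
right_tailed = rng.exponential(size=10_000)  # long right tail -> skewness > 0
print(skew(symmetric), skew(right_tailed))
```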
```
data.skew()
```
#### Plotting the histogram of each numerical variable (in this case, all features). The main idea here is to visualize the data distribution of each feature. This method can quickly bring insights such as:
- Checking the kind of each feature's distribution
- Checking data symmetry
- Verifying feature frequencies
- Identifying outliers
```
sns.set(style='white',font_scale=1.3, rc={'figure.figsize':(20,20)})
ax=data.hist(bins=20 )
```
#### To reinforce our insights about data symmetry and outliers, we can plot some boxplots:
**"A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2). The whiskers extend from the edges of box to show the range of the data. The position of the whiskers is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the box. Outlier points are those past the end of the whiskers."**
```
data.plot( kind = 'box', subplots = True, layout = (4,4), sharex = False, sharey = False,color='black')
plt.show()
data.isnull().sum()
```
## Data Preprocessing
We are going to use the K-means algorithm. Since it uses distance as its principal metric to allocate each point to a cluster, we need to be careful with scale: otherwise large-scale features get more "relevance" than low-scale ones.\
To prevent that, we can use one of many scaling methods; in this case I'm going to standardize the data to have zero mean and unit variance.
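To see why, here is a small sketch with made-up values (not the wine data): without scaling, the large-magnitude feature dominates the Euclidean distance, while after standardization both features contribute comparably:
```
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two samples; column 0 is on a small scale, column 1 on a scale of hundreds
X_toy = np.array([[13.0, 1000.0],
                  [14.0,  700.0]])
print(np.linalg.norm(X_toy[0] - X_toy[1]))        # ~300, driven almost entirely by column 1
X_scaled = StandardScaler().fit_transform(X_toy)
print(np.linalg.norm(X_scaled[0] - X_scaled[1]))  # both columns now contribute comparably
```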
```
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
data_cluster=data.copy()
data_cluster[data_cluster.columns]=std_scaler.fit_transform(data_cluster)
data_cluster.describe()
```
## Model Implementation
```
import sklearn.cluster as cluster
inertia = []
for i in tqdm(range(2,10)):
kmeans = cluster.KMeans(n_clusters=i,
init='k-means++',
n_init=15,
max_iter=500,
random_state=17)
kmeans.fit(data_cluster)
inertia.append(kmeans.inertia_)
# Compute the silhouette score. Here, the bigger the score, the better the clustering.
from sklearn.metrics import silhouette_score
silhouette = {}
for i in tqdm(range(2,10)):
kmeans = cluster.KMeans(n_clusters=i,
init='k-means++',
n_init=15,
max_iter=500,
random_state=17)
kmeans.fit(data_cluster)
silhouette[i] = silhouette_score(data_cluster, kmeans.labels_, metric='euclidean')
plt.subplot(1, 2, 1)
plt.plot(range(2,len(inertia)+2), inertia, marker='o',lw=2,ms=8,color='red')
plt.xlabel('Number of clusters')
plt.title('K-means Inertia',fontweight='bold')
plt.grid(True)
plt.subplot(1, 2, 2)
plt.bar(range(len(silhouette)), list(silhouette.values()), align='center',color= 'red',width=0.5)
plt.xticks(range(len(silhouette)), list(silhouette.keys()))
plt.grid()
plt.title('Silhouette Score',fontweight='bold')
plt.xlabel('Number of Clusters')
plt.show()
# As we can see, all the metrics indicate that K=3 is the best number of clusters, so we'll be using it
kmeans = cluster.KMeans(n_clusters=3,random_state=17,init='k-means++')
kmeans_labels = kmeans.fit_predict(data_cluster)
centroids = kmeans.cluster_centers_
# centroids_pca = pca_2.transform(centroids)
pd.Series(kmeans_labels).value_counts()
# Here we can visualize each feature's distribution within each cluster; in this step we can define some characteristics for each group
data2=data.copy()
data2['Cluster']=kmeans_labels
aux=data2.columns.tolist()
aux[0:len(aux)-1]
for col in aux[0:len(aux)-1]:
    grid= sns.FacetGrid(data2, col='Cluster')
    grid.map(plt.hist, col)
# Another approach is to look at each cluster centroid to define the cluster characteristics
centroids_data=pd.DataFrame(data=std_scaler.inverse_transform(centroids), columns=data.columns)
centroids_data.head()
from sklearn.decomposition import PCA
pca_2 = PCA(2)
pca_2_result = pca_2.fit_transform(data_cluster)
print('Cumulative variance explained by 2 principal components: {:.2%}'.format(np.sum(pca_2.explained_variance_ratio_)))
plt.scatter(x=pca_2_result[:, 0], y=pca_2_result[:, 1], lw=0.1)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('Data represented by the 2 strongest principal components',fontweight='bold')
plt.show()
import sklearn.cluster as cluster
kmeans = cluster.KMeans(n_clusters=3,random_state=17,init='k-means++')
kmeans_labels = kmeans.fit_predict(data_cluster)
centroids = kmeans.cluster_centers_
centroids_pca = pca_2.transform(centroids)
pd.Series(kmeans_labels).value_counts()
sns.set(style='white', rc={'figure.figsize':(9,6)},font_scale=1.1)
plt.scatter(x=pca_2_result[:, 0], y=pca_2_result[:, 1], c=kmeans_labels, cmap='autumn')
plt.scatter(centroids_pca[:, 0], centroids_pca[:, 1],
marker='x', s=169, linewidths=3,
color='black', zorder=10,lw=3)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('Clustered Data (PCA visualization)',fontweight='bold')
plt.show()
```
|
github_jupyter
|
**Correctness verified on Python 3.7:**
+ pandas 0.23.0
+ numpy 1.14.5
+ scipy 1.1.0
# Confidence intervals for two proportions
```
import numpy as np
import pandas as pd
import scipy
from statsmodels.stats.weightstats import *
from statsmodels.stats.proportion import proportion_confint
print(np.__version__)
print(pd.__version__)
print(scipy.__version__)
```
## Loading the data
```
data = pd.read_csv('banner_click_stat.txt', header = None, sep = '\t')
data.columns = ['banner_a', 'banner_b']
data.head()
data.describe()
```
## Interval estimates of the proportions
$$\frac1{ 1 + \frac{z^2}{n} } \left( \hat{p} + \frac{z^2}{2n} \pm z \sqrt{ \frac{ \hat{p}\left(1-\hat{p}\right)}{n} + \frac{z^2}{4n^2} } \right), \;\; z \equiv z_{1-\frac{\alpha}{2}}$$
```
conf_interval_banner_a = proportion_confint(sum(data.banner_a),
data.shape[0],
method = 'wilson')
conf_interval_banner_b = proportion_confint(sum(data.banner_b),
data.shape[0],
method = 'wilson')
print('interval for banner a [%f, %f]' % conf_interval_banner_a)
print('interval for banner b [%f, %f]' % conf_interval_banner_b)
```
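As a sanity check, the Wilson bound for banner A can also be computed directly from the formula above (a minimal sketch reusing the already-loaded `data`, with $\alpha = 0.05$):
```
z = scipy.stats.norm.ppf(1 - 0.05 / 2.)
n = data.shape[0]
p_hat = float(sum(data.banner_a)) / n

center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
half_width = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)

print('manual Wilson interval for banner a [%f, %f]' % (center - half_width, center + half_width))
```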
### How do we compare them?
## Confidence interval for the difference of proportions (independent samples)
  | $X_1$ | $X_2$
------------- | ------------- | -------------
1 | a | b
0 | c | d
$\sum$ | $n_1$ | $n_2$
$$ \hat{p}_1 = \frac{a}{n_1}$$
$$ \hat{p}_2 = \frac{b}{n_2}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \hat{p}_1 - \hat{p}_2 \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{\hat{p}_1(1 - \hat{p}_1)}{n_1} + \frac{\hat{p}_2(1 - \hat{p}_2)}{n_2}}$$
```
def proportions_confint_diff_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
print("confidence interval: [%f, %f]" % proportions_confint_diff_ind(data.banner_a, data.banner_b))
```
## Confidence interval for the difference of proportions (paired samples)
$X_1$ \ $X_2$ | 1 | 0 | $\sum$
------------- | ------------- | ------------- | -------------
1 | e | f | e + f
0 | g | h | g + h
$\sum$ | e + g | f + h | n
$$ \hat{p}_1 = \frac{e + f}{n}$$
$$ \hat{p}_2 = \frac{e + g}{n}$$
$$ \hat{p}_1 - \hat{p}_2 = \frac{f - g}{n}$$
$$\text{Confidence interval for }p_1 - p_2\colon \;\; \frac{f - g}{n} \pm z_{1-\frac{\alpha}{2}}\sqrt{\frac{f + g}{n^2} - \frac{(f - g)^2}{n^3}}$$
```
def proportions_confint_diff_rel(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
sample = list(zip(sample1, sample2))
n = len(sample)
f = sum([1 if (x[0] == 1 and x[1] == 0) else 0 for x in sample])
g = sum([1 if (x[0] == 0 and x[1] == 1) else 0 for x in sample])
left_boundary = float(f - g) / n - z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
right_boundary = float(f - g) / n + z * np.sqrt(float((f + g)) / n**2 - float((f - g)**2) / n**3)
return (left_boundary, right_boundary)
print("confidence interval: [%f, %f]" % proportions_confint_diff_rel(data.banner_a, data.banner_b))
```
# Chapter 9 - Sinusoids and Phasors
## Introduction
Because ac is more efficient and economical to transmit over long distances, ac systems ended up prevailing. We now begin the analysis of circuits in which the source voltage or current varies with time. In this chapter we are particularly interested in sinusoidally time-varying excitation, or simply excitation by a sinusoid.
## Sinusoids
<div class="alert alert-info"><strong>Definition</strong>:
A sinusoid is a signal that has the form of the sine or cosine function.
</div>
A sinusoidal current is usually referred to as *alternating current (ac)*. Such a current reverses at regular time intervals and has alternately positive and negative values. Circuits driven by sinusoidal current or voltage sources are called *ac circuits*.
Consider the sinusoidal voltage
$$ v(t) = V_m \sin \omega t $$
where
$V_m =$ the amplitude of the sinusoid
$\omega =$ the angular frequency in radians/s
$\omega t =$ the argument of the sinusoid
```
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Data for plotting
t = np.arange(0.0, 4.5*np.pi, 0.01)
s = np.sin(1 * t)
fig, ax = plt.subplots()
ax.plot(t, s)
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='Plot')
ax.grid()
# fig.savefig("test.png")
plt.show()
```
$$ \omega T = 2 \pi \quad \Rightarrow \quad T = \frac{2 \pi}{\omega} $$
The fact that $v(t)$ repeats every $T$ seconds is shown by replacing $t$ with $t + T$. This gives
$$v(t + T) = V_m \sin \omega (t + T)$$
$$ v(t + T ) = V_m \sin \omega \left(t + \frac{2 \pi}{\omega} \right) $$
$$ v(t + T) = V_m \sin (\omega t + 2 \pi) $$
Hence:
$$ v(t + T) = v(t) $$
which means that $v$ has the same value at $t + T$ as it does at $t$, and $v(t)$ is said to be periodic.
<div class="alert alert-info"><strong>Definition</strong>:
A periodic function is one that satisfies $f(t) = f(t + nT)$ for all $t$ and for all integers $n$.
</div>
As mentioned, the period $T$ of the periodic function is the time of one complete cycle, or the number of seconds per cycle. The reciprocal of this quantity is the number of cycles per second, known as the cyclic frequency $f$ of the sinusoid. Thus,
$$ f = \frac{1}{T} $$
and it clearly follows that
$$ \omega = 2 \pi f $$
While $\omega$ is in radians per second (rad/s), $f$ is in hertz (Hz).
Now consider a more general expression of the sinusoid,
$$ v(t) = V_m \sin (\omega t + \phi) $$
where $(\omega t + \phi)$ is the argument and $\phi$ is the phase. Both the argument and the phase can be in radians or degrees.
Examine the two sinusoids
$$ v_1(t) = V_m \sin \omega t \qquad \mbox{and} \qquad v_2(t) = V_m \sin (\omega t + \phi) $$
```
t = np.arange(0.0, 4.5*np.pi, 0.01)
phi = 2
v1 = np.sin(t)
v2 = np.sin(t + phi)
fig, ax = plt.subplots()
ax.plot(t, v1 , label='v1')
ax.plot(t, v2 , label='v2')
ax.set(xlabel='time (s)', ylabel='voltage (mV)',
title='Plot')
ax.grid()
plt.legend()
plt.show()
```
A sinusoid can be expressed in either sine or cosine form. When comparing two sinusoids, it is useful to express both as either sine or cosine with positive amplitudes. This is achieved by using the following trigonometric identities:
$$\begin{array}{l}
\sin (A \pm B) = \sin A \cos B \pm \cos A \sin B \\
\cos (A \pm B) = \cos A \cos B \mp \sin A \sin B
\end{array}$$
With these identities, it is easy to show that
$$\begin{array}{rcl}
\sin (\omega t \pm 180^\circ) &=& - \sin \omega t \\
\cos (\omega t \pm 180^\circ) &=& - \cos \omega t \\
\sin (\omega t \pm 90^\circ) &=& \pm \cos \omega t \\
\cos (\omega t \pm 90^\circ) &=& \mp \sin \omega t
\end{array}$$
Using these relationships, we can transform a sinusoid from sine form to cosine form or vice versa.
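A quick numerical spot check of the last identity above (a minimal sketch with an arbitrary angle):
```
import math

wt = 0.7  # arbitrary angle in radians
print(math.cos(wt - math.pi/2))  # cos(wt - 90°)
print(math.sin(wt))              # equals sin(wt)
```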
```
%reset -s -f
```
## Example
Find the amplitude, phase, period, and frequency of the sinusoid
$$ v(t) = 12 \cos (50t + 10^\circ) $$
### Solution
The amplitude is $V_m = 12 \, \mathrm{V}$
The phase is $\phi = 10^\circ$
The angular frequency is $\omega = 50 \, \mathrm{rad/s}$.
The period is
$$T = \frac{2 \pi}{\omega} = \frac{2 \pi}{50} = 0.1257 \, \mathrm{s}$$
The frequency is
$$ f = \frac{1}{T} = 7.958 \, \mathrm{Hz} $$
```
import math
Vm = 12 # V
phi = 10 # deg
omega = 50 # rad/s
T = 2*math.pi/omega
f = 1/T
print('T = %.4f'%T)
print('f = %.3f Hz'%f)
```
Given the sinusoid $5 \sin (4 \pi t - 60^\circ)$, calculate its amplitude, phase, angular frequency, period, and frequency.
The amplitude is $V_m = 5$
The phase is $\phi = -60^\circ$
The angular frequency is $\omega = 12.57 \, \mathrm{rad/s}$
The period is
$$ T = \frac{2 \pi}{\omega} $$
The frequency is
$$ f = \frac{1}{T} $$
```
Vm = 5
phi = -60 # deg
omega = math.pi*4 # rad/s
T = 2*math.pi/omega
f = 1/T
print('omega = %1.2f rad/s'%omega)
print('T = %.2f s/rev'%T)
print('f = %.2f Hz'%f)
%reset -s -f
```
# Phasors
Sinusoids are easily expressed in terms of phasors, which are more convenient to work with than sine and cosine functions.
<div class="alert alert-info"><strong>Definition</strong>:
A phasor is a complex number that represents the amplitude and the phase of a sinusoid.
</div>
A complex number $z$ can be written in rectangular form as
$$ z = x + jy $$
where $j = \sqrt{-1}$; $x$ is the real part of $z$ and $y$ is the imaginary part of $z$.
The complex number $z$ can also be written in polar or exponential form as
$$ z = r \angle \phi = re^{j \phi} $$
where $r$ is the magnitude of $z$ and $\phi$ is the phase of $z$. Note, then, that $z$ can be represented in three ways:
$$\begin{array}{lcl}
z = x + jy & & \mbox{Rectangular form} \\
z = r \angle \phi & & \mbox{Polar form} \\
z = re^{j \phi} & & \mbox{Exponential form}
\end{array}$$
Given $r$ and $\phi$, we can obtain $x$ and $y$ as
$$ x = r \cos \phi \qquad \qquad y = r \sin \phi $$
Thus, $z$ can be written as
$$ z = x + jy = r \angle \phi = r( \cos \phi + j \sin \phi ) $$
* Addition and subtraction of complex numbers are easier in rectangular form.
* Multiplication and division are easier in polar form.
The following operations are important.
__Addition:__
$$ z_1 + z_2 = (x_1 + x_2) + j(y_1 + y_2) $$
__Subtraction:__
$$ z_1 - z_2 = (x_1 - x_2) + j(y_1 - y_2) $$
__Multiplication:__
$$ z_1 z_2 = r_1 r_2 \; \angle (\phi_1 + \phi_2) $$
__Division:__
$$ \frac{z_1}{z_2} = \frac{r_1}{r_2} \; \angle (\phi_1 - \phi_2) $$
__Reciprocal:__
$$ \frac{1}{z} = \frac{1}{r} \; \angle (- \phi) $$
__Square root:__
$$ \sqrt{z} = \sqrt{r} \; \angle \frac{\phi}{2} $$
__Complex conjugate:__
$$ z^* = x - jy = r \angle (- \phi) = r e^{-j \phi} $$
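As a quick numerical check of the multiplication rule (a minimal sketch using Python's `cmath`): multiplying two numbers in rectangular form and converting the product to polar form gives the magnitude $r_1 r_2$ and the angle $\phi_1 + \phi_2$.
```
import math, cmath

z1 = cmath.rect(2, math.radians(30))   # 2∠30°
z2 = cmath.rect(3, math.radians(45))   # 3∠45°

r, phi = cmath.polar(z1 * z2)
print(r, math.degrees(phi))            # approximately 6.0 and 75°
```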
The idea of phasor representation is based on Euler's identity. In general:
$$ e^{\pm j \phi} = \cos \phi \pm j \sin \phi $$
which shows that we may regard $\cos \phi$ and $\sin \phi$ as the real and imaginary parts of $e^{j \phi}$; we can write
$$\begin{array}{l}
\cos \phi = \mathrm{Re} (e^{j \phi}) \\
\sin \phi = \mathrm{Im} (e^{j \phi})
\end{array}$$
## Example
Evaluate this complex number:
$$ (40 \angle 50^\circ + 20 \angle -30^\circ)^{1/2} $$
### Solution
To perform the addition, we convert from polar to rectangular coordinates
$$ z = r \angle \phi \quad \rightarrow \quad
\left\{
\begin{array}{l}
x = r \cos \phi \\
y = r \sin \phi
\end{array}
\right.$$
__* Step by step:__
```
import math
phi1 = 50*(math.pi/180) # rad (conversion to radians)
r1 = 40
x1 = r1*math.cos(phi1)
y1 = r1*math.sin(phi1)
print('z1 = %.2f + (%.2f)j'%(x1,y1))
phi2 = -30*(math.pi/180) # rad
r2 = 20
x2 = r2*math.cos(phi2)
y2 = r2*math.sin(phi2)
print('z2 = %.2f + (%.2f)j'%(x2,y2))
```
The sum gives:
```
x3 = x1 + x2
y3 = y1 + y2
print('z3 = %.2f + (%.2f)j'%(x3,y3))
```
We convert $z_3$ to polar coordinates
```
r3 = math.sqrt(x3**2 + y3**2)
phi3 = math.atan(y3/x3)
print('z3 = %.2f<%.2frad'%(r3,phi3))
print('z3 = %.2f<%.2f°'%(r3,phi3*180/math.pi))
```
Taking the square root of this expression
```
r4 = math.sqrt(r3)
phi4 = phi3/2
print('z4 = %.2f<%.2frad'%(r4,phi4))
print('z4 = %.2f<%.2f°'%(r4,phi4*180/math.pi))
```
__* Using the cmath library__
```
import cmath
r1 = 40 ; phi1 = 50*math.pi/180
r2 = 20 ; phi2 = -30*math.pi/180
# Conversion to rectangular coordinates
c1 = cmath.rect(r1,phi1)
c2 = cmath.rect(r2,phi2)
c3 = cmath.sqrt(c1 + c2)
c3p = cmath.polar(c3)
print('c1 = {:.2f}'.format(c1))
print('c2 = {:.2f}'.format(c2))
print('c3 = {:.2f}'.format(c3))
print('c3p = %.2f<%.2frad'%(c3p[0],c3p[1]))
%reset -s -f
```
## Example
$$ \frac{ (10 \angle -30^\circ) + (3 - 4j) }{ (2 + 4j) \, (3 - 5j)^* } $$
### Solution
1. We convert $(10 \angle -30^\circ)$ to rectangular form
$$\begin{array}{l}
x = r \cos \phi \\
y = r \sin \phi
\end{array}$$
```
import math
import cmath
r1 = 10 ; phi1 = -30*(math.pi/180)
x1 = r1*math.cos(phi1)
y1 = r1*math.sin(phi1)
print('(%.2f %.2fj)'%(x1,y1))
```
2. We perform the addition
```
c1 = complex(x1, y1)
c2 = complex(3, -4)
Num = c1 + c2
Num_pol = cmath.polar(Num)
print('Num = {:.2f}'.format(Num))
print('Num_pol = %.2f<%.2frad'%Num_pol)
c3 = complex(2,4)
c4 = (3-5j).conjugate()
Den = c3*c4
Den_pol = cmath.polar(Den)
print('Den = {:.2f}'.format(Den))
print('Den_pol = %.2f<%.2frad'%Den_pol)
Res = Num/Den
Res_pol = cmath.polar(Res)
print('Res = {:.2f}'.format(Res))
print('Res_pol = %.3f<%.2frad'%Res_pol)
print('%.2frad = %s°'%(Res_pol[1],round(Res_pol[1]*180/math.pi,2)))
%reset -s -f
```
## Practice problem
Evaluate the following complex expression:
$$ [(5 + 2j)(-1 + 4j) - 5 \angle 60^\circ]^* $$
### Solution
__Using the cmath library__
```
import math
import cmath
z1 = complex(5,2)
z2 = complex(-1,4)
z3 = cmath.rect(5,60*math.pi/180)
Resultado = (z1*z2 - z3).conjugate()
print('Resultado = {:.2f}'.format(Resultado))
```
__Step-by-step solution__
1. We convert $(5 + 2j)$ to polar coordinates
```
x1 = 5
y1 = 2
r1 = math.sqrt(x1**2 + y1**2)
phi1 = math.atan(2/5)
z1 = complex(x1,y1)
z1p = (r1,phi1)
print('z1 = {:.2f}'.format(z1))
print('z1p = %.3f<%.3frad'%z1p)
```
2. We convert $(-1 + 4j)$ to polar coordinates
```
x2 = -1
y2 = 4
r2 = math.sqrt((-1)**2 + 4**2)
phi2 = math.pi + math.atan(4/-1)
z2 = complex(x2,y2)
z2p = (r2,phi2)
print('z2 = {:.2f}'.format(z2))
print('z2p = %.3f<%.2frad'%z2p)
```
3. We multiply
```
r3 = z1p[0]*z2p[0]
phi3 = z1p[1] + z2p[1]
z3p = (r3,phi3)
print('z3p = %.3f<%.3frad'%z3p)
x3 = r3*math.cos(phi3)
y3 = r3*math.sin(phi3)
z3 = complex(x3,y3)
print('z3 = {:.2f}'.format(z3))
```
4. We convert $5 \angle 60^\circ$ to rectangular coordinates
```
r4 = 5
phi4 = 60*math.pi/180
z4p = (r4,phi4)
x4 = r4*math.cos(phi4)
y4 = r4*math.sin(phi4)
z4 = complex(x4,y4)
print('z4 = {:.2f}'.format(z4))
```
5. We subtract
```
z5 = z3 - z4
```
6. We take the conjugate
```
Res = z5.conjugate()
```
Result:
```
print('Res = {:.2f}'.format(Res))
%reset -s -f
```
## Practice problem
$$ \frac{ (10 + 5j) + (3 \angle 40^\circ) }{ (-3 + 4j) } + (10 \angle 30^\circ) $$
```
import math
import cmath
```
__Using the cmath library__
```
z1 = complex(10,5)
z2 = cmath.rect(3,40*math.pi/180)
z3 = complex(-3,4)
z4 = cmath.rect(10,30*math.pi/180)
Res = (z1 + z2)/z3 + z4
print('Res = {:.3f}'.format(Res))
```
__Step-by-step solution__
```
# Data:
x1 = 10 ; y1 = 5
r2 = 3
phi2 = 40*math.pi/180
x3 = -3 ; y3 = 4
r4 = 10
phi4 = 30*math.pi/180
```
$\left.
\begin{array}{l}
r_2 = 3 \\
\phi_2 = 40^\circ
\end{array}
\right\} \quad \rightarrow \quad
\begin{array}{l}
x_2 = r_2 \cos \phi_2 \\
y_2 = r_2 \sin \phi_2
\end{array}$
```
x2 = r2*math.cos(phi2)
y2 = r2*math.sin(phi2)
x_s1 = x1 + x2
y_s1 = y1 + y2
print('s1 = (%.3f,%.3f)'%(x_s1,y_s1))
```
$\begin{array}{l}
r_{s1} = \sqrt{x_{s1}^2 + y_{s1}^2} \\
\phi_{s1} = \arctan (y_{s1}/x_{s1})
\end{array}$
```
# Convert to polar form
r_s1 = math.sqrt(x_s1**2 + y_s1**2)
phi_s1 = math.atan(y_s1/x_s1)
print('s1p = (%.3f,<%.3frad)'%(r_s1,phi_s1))
```
$\left\{
\begin{array}{l}
r_3 = \sqrt{x_3^2 + y_3^2} \\
\displaystyle \phi_3 = \pi - \arctan \left| \frac{y_3}{x_3} \right|
\end{array}
\right.$
```
r3 = math.sqrt(x3**2 + y3**2)
phi3 = math.pi - math.atan( y3 / abs(x3) )
print('c3p = (%.3f,%.3frad)'%(r3,phi3))
```
We perform the division in polar coordinates
$\left\{
\begin{array}{l}
\displaystyle r_{f1} = \frac{r_{s1}}{r_3} \\
\phi_{f1} = \phi_{s1} - \phi_3
\end{array}
\right.$
```
r_f1 = r_s1/r3
phi_f1 = phi_s1 - phi3
```
We convert the result of the fraction to rectangular coordinates
$\left\{
\begin{array}{l}
x_{f1} = r_{f1} \cos \phi_{f1} \\
y_{f1} = r_{f1} \sin \phi_{f1}
\end{array}
\right.$
```
x_f1 = r_f1*math.cos(phi_f1)
y_f1 = r_f1*math.sin(phi_f1)
print('f1 = (%.3f,%.3f)'%(x_f1,y_f1))
```
We convert $(10 \angle 30^\circ)$ to rectangular coordinates
$\left\{
\begin{array}{l}
x_4 = r_4 \cos \phi_4 \\
y_4 = r_4 \sin \phi_4
\end{array}
\right.$
```
x4 = r4*math.cos(phi4)
y4 = r4*math.sin(phi4)
print('c4 = (%.3f,%.3f)'%(x4,y4))
```
We add in rectangular coordinates
```
x_res = x_f1 + x4
y_res = y_f1 + y4
# Print the result
print('Resultado = (%.3f,%.3f)'%(x_res,y_res))
%reset -s -f
```
## Transform these sinusoids into phasors
$\begin{array}{ll}
a) & i = 6 \cos (50t - 40^\circ) \, \mathrm{A} \\
b) & v = -4 \sin (30t + 50^\circ) \, \mathrm{V}
\end{array}$
### Solution
$a)$
$$ i = 6 \cos (50t - 40^\circ) \, \mathrm{A} $$
has the phasor $\vec{I} = 6 \angle -40^\circ \, \mathrm{A}$
$b)$ Since $-\sin A = \cos (A + 90^\circ)$:
$$ v = -4 \sin (30t + 50^\circ) \, \mathrm{V} = 4 \cos (30t + 50^\circ + 90^\circ) $$
$$ v = 4 \cos (30t + 140^\circ) \, \mathrm{V} $$
The phasor form of $v$ is
$$ \vec{V} = 4 \angle 140^\circ \, \mathrm{V} $$
## Practice problem
Express these sinusoids as phasors:
$\begin{array}{ll}
a) & v = -7 \cos (2t + 40^\circ) \, \mathrm{V} \\
b) & i = 4 \sin (10t + 10^\circ) \, \mathrm{A}
\end{array}$
### Solution
$a)$ Since $- \cos A = \cos (A + 180^\circ)$:
$$ v = -7 \cos (2t + 40^\circ) \, \mathrm{V} = 7 \cos (2t + 40^\circ + 180^\circ) \, \mathrm{V} $$
$$ v = 7 \cos (2t + 220^\circ) \, \mathrm{V} $$
The phasor form of $v$ is $\vec{V} = 7 \angle 220^\circ \, \mathrm{V}$
$b)$ Knowing that $\sin A = \cos (A - 90^\circ)$,
$$ i = 4 \sin (10t + 10^\circ) \, \mathrm{A} = 4 \cos (10t + 10^\circ - 90^\circ) \, \mathrm{A} $$
$$ i = 4 \cos (10t - 80^\circ) \, \mathrm{A} $$
The phasor form is $\vec{I} = 4 \angle -80^\circ \, \mathrm{A}$
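As a quick numerical check of part $a)$ (a minimal sketch): both cosine forms give the same value at an arbitrary instant, so the phasor obtained above is consistent.
```
import math

t = 0.3  # arbitrary time in seconds
lhs = -7 * math.cos(2*t + math.radians(40))
rhs = 7 * math.cos(2*t + math.radians(220))
print(lhs, rhs)  # identical values
```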
## Example 9.5
Find the sinusoids represented by these phasors:
$\begin{array}{ll}
a) & \vec{I} = -3 + 4j \, \mathrm{A} \\
b) & \vec{V} = j8e^{-j20^\circ} \, \mathrm{V}
\end{array}$
### Solution
$a)$
```
import math
import cmath
I = complex(-3,4)
I_pol = cmath.polar(I)
r = I_pol[0]
phi_deg = I_pol[1]*180/math.pi
print('I_polar = (%.2f<%.3frad)'%I_pol)
print('I_polar = (%.2f<%.2f°)'%(r,phi_deg))
%reset -s -f
```
Transforming to the time domain,
$$ i(t) = 5 \cos (\omega t + 126.87^\circ) \, \mathrm{A} $$
$b)$ Since $j = 1 \angle 90^\circ$,
$$ \vec{V} = 8j \angle -20^\circ = (1 \angle 90^\circ)(8 \angle -20^\circ) $$
$$ = 8 \angle (90^\circ - 20^\circ) = 8 \angle 70^\circ \, \mathrm{V} $$
Transforming this to the time domain gives
$$ v(t) = 8 \cos (\omega t + 70^\circ) \, \mathrm{V} $$
## Practice problem 9.5
Find the sinusoids corresponding to these phasors:
$\begin{array}{ll}
a) & \vec{V} = -10 \angle 30^\circ \, \mathrm{V} \\
b) & \vec{I} = j(5 - j12) \, \mathrm{A}
\end{array}
$
### Solution
$a)$
$$ \vec{V} = -10 \angle 30^\circ = 10 \angle (30^\circ + 180^\circ) $$
$$ = 10 \angle 210^\circ \, \mathrm{V} $$
Transforming this to the time domain gives
$$ v(t) = 10 \cos (\omega t + 210^\circ) \, \mathrm{V} $$
$b)$
$$ \vec{I} = j(5 - j12) \, \mathrm{A} = 12 + 5j \, \mathrm{A} $$
```
import math
import cmath
I = complex(12,5)
I_pol = cmath.polar(I)
r = I_pol[0]
phi = I_pol[1]
print('I_polar = (%.2f<%.3frad)'%(r,phi))
print('I_polar = (%.2f<%.2f°)'%(r,phi*180/math.pi))
```
$$ i(t) = 13 \cos(\omega t + 22.62^\circ) \, \mathrm{A} $$
```
%reset -s -f
```
## Example 9.6
Given
$$\begin{array}{l}
i_1(t) = 4 \cos (\omega t + 30^\circ) \, \mathrm{A} \\
i_2(t) = 5 \sin (\omega t - 20^\circ) \, \mathrm{A}
\end{array}$$
find their sum.
### Solution
This is an important use of phasors: adding sinusoids of the same frequency. The current $i_1(t)$ is in the standard form. Its phasor is
$$ \vec{I}_1 = 4 \angle 30^\circ $$
We need to express $i_2(t)$ in cosine form. The rule for converting sine to cosine is to subtract $90^\circ$. Thus,
$$ i_2 = 5 \cos (\omega t - 20^\circ - 90^\circ) = 5 \cos (\omega t - 110^\circ) $$
and its phasor is
$$ \vec{I}_2 = 5 \angle -110^\circ $$
If we let $i = i_1 + i_2$, then
$$ \vec{I} = \vec{I}_1 + \vec{I}_2 = 4 \angle 30^\circ + 5 \angle -110^\circ $$
```
import math
import cmath
r1 = 4 ; phi1 = 30*(math.pi/180)
r2 = 5 ; phi2 = -110*(math.pi/180)
I1 = cmath.rect(r1,phi1)
I2 = cmath.rect(r2,phi2)
print('I1 = {:.3f}'.format(I1))
print('I2 = {:.3f}'.format(I2))
I = I1 + I2
print('I = {:.3f}'.format(I))
I_polar = cmath.polar(I)
print('I_polar = (%.3f<%.3frad)'%I_polar)
r = I_polar[0]
phi = I_polar[1]*180/math.pi
print('I_polar = (%.3f<%.3f°)'%(r,phi))
```
Transforming this to the time domain, we obtain
$$ i(t) = 3.218 \cos (\omega t - 56.976^\circ) \, \mathrm{A} $$
```
%reset -s -f
```
## Practice problem 9.6
If
$$\begin{array}{l}
v_1 = -10 \sin (\omega t + 30^\circ) \, \mathrm{V} \\
v_2 = 20 \cos(\omega t - 45^\circ) \, \mathrm{V}
\end{array}$$
find $v = v_1 + v_2$.
### Solution
```
import math, cmath
```
$$ v_1 = -10 \sin (\omega t + 30^\circ) \, \mathrm{V} = 10 \cos (\omega t + 30^\circ + 90^\circ) \, \mathrm{V}$$
$$ v_1 = 10 \cos (\omega t + 120^\circ) $$
Then:
$$\begin{array}{l}
\vec{V}_1 = 10 \angle 120^\circ \\
\vec{V}_2 = 20 \angle -45^\circ
\end{array}$$
```
r1 = 10 ; phi1 = 120*(math.pi/180)
r2 = 20 ; phi2 = -45*(math.pi/180)
V1 = cmath.rect(r1,phi1)
V2 = cmath.rect(r2,phi2)
print('V1 = {:.3f}'.format(V1))
print('V2 = {:.3f}'.format(V2))
# Add the phasors
V = V1 + V2
print('V = {:.3f}'.format(V))
V_polar = cmath.polar(V)
print('V_polar = (%.3f<%.3frad)'%V_polar)
r = V_polar[0] ; phi = V_polar[1]*180/math.pi
print('V_polar = (%.2f<%.2f°)'%(r,phi))
```
$$ v(t) = 10.66 \cos (\omega t - 30.95^\circ) \, \mathrm{V} $$
```
%reset -s -f
```
## Example 9.7
Using the phasor approach, determine the current $i(t)$ in a circuit described by the integrodifferential equation
$$ 4i + 8 \int i \, dt - 3 \frac{di}{dt} = 50 \cos (2t + 75^\circ) $$
### Solution
We transform each term of the equation from the time domain to the phasor domain. Keeping in mind the relations
$$\begin{array}{ccc}
\displaystyle \frac{dv}{dt} & \Leftrightarrow & j \omega V \\
\mbox{(Time domain)} & & \mbox{(Phasor domain)}
\end{array}$$
Similarly, the integral of $v(t)$ transforms to the phasor domain as $V / j \omega$:
$$\begin{array}{ccc}
\displaystyle \int v \, dt & \Leftrightarrow & \displaystyle \frac{V}{j \omega} \\
\mbox{(Time domain)} & & \mbox{(Phasor domain)}
\end{array}$$
we obtain the phasor form of the given equation as
$$ 4 I + \frac{8I}{j \omega} - 3j \omega I = 50 \angle 75^\circ $$
But $\omega = 2$, so
$$ I (4 - 4j -6j) = 50 \angle 75^\circ $$
$$ I = \frac{50 \angle 75^\circ}{4 - 10j} $$
```
import math , cmath
z2 = cmath.polar(4-10j)
r2 = z2[0] ; phi2 = z2[1]*180/math.pi
print('z2 = (%.2f<%.2frad)'%z2)
print('z2 = (%.2f<%.2f°)'%(r2,phi2))
r1 = 50 ; phi1 = 75
r = r1/r2 ; phi = phi1 - phi2
print('I = (%.3f<%.2f°)A'%(r,phi))
```
Converting to the time domain,
$$ i(t) = 4.642 \cos (2t + 143.2^\circ) \, \mathrm{A} $$
```
%reset -s -f
```
### Practice problem 9.7
Find the voltage $v(t)$ in a circuit described by the integrodifferential equation
$$ 2 \frac{dv}{dt} + 5v + 10 \int v \, dt = 20 \cos (5t - 30^\circ) $$
using the phasor approach.
$$ 2 j \omega V + 5V + 10 \frac{V}{j \omega} = 20 \angle -30^\circ $$
Since $\omega = 5$,
$$ V (10j + 5 - 2j) = 20 \angle -30^\circ $$
$$ V = \frac{20 \angle -30^\circ}{5 + 8j} $$
```
import math, cmath
z2 = cmath.polar(5 + 8j)
r1 = 20 ; phi1 = -30
r2 = z2[0] ; phi2 = z2[1]*(180/math.pi)
r = r1/r2 ; phi = phi1 - phi2
print('z2 = (%.2f<%.2f°)'%(r2,phi2))
print('V = (%.2f<%.1f°)'%(r,phi))
```
$$ V = \frac{20 \angle -30^\circ}{9.43 \angle 57.99^\circ} = 2.12 \angle -88^\circ $$
Therefore:
$$ v(t) = 2.12 \cos (5t - 88^\circ) \, \mathrm{V} $$
```
# This cell applies the notebook style
from IPython.core.display import HTML
css_file = 'styles/aeropython.css'
HTML(open(css_file, "r").read())
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License

# Using environments
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Use curated environment](#Use-curated-environment)
1. [Create environment](#Create-environment)
1. Add Python packages
1. Specify environment variables
1. [Submit run using environment](#Submit-run-using-environment)
1. [Register environment](#Register-environment)
1. [List and get existing environments](#List-and-get-existing-environments)
1. [Other ways to create environments](#Other-ways-to-create-environments)
1. From existing Conda environment
1. From Conda or pip files
1. [Estimators and environments](#Estimators-and-environments)
1. [Using environments for inferencing](#Using-environments-for-inferencing)
1. [Docker settings](#Docker-settings)
1. [Spark and Azure Databricks settings](#Spark-and-Azure-Databricks-settings)
1. [Next steps](#Next-steps)
## Introduction
Azure ML environments are an encapsulation of the environment where your machine learning training happens. They define Python packages, environment variables, Docker settings and other attributes in declarative fashion. Environments are versioned: you can update them and retrieve old versions to revisit and review your work.
Environments allow you to:
* Encapsulate dependencies of your training process, such as Python packages and their versions.
* Reproduce the Python environment on your local computer in a remote run on VM or ML Compute cluster
* Reproduce your experimentation environment in production setting.
* Revisit and audit the environment in which an existing model was trained.
Environment, compute target and training script together form run configuration: the full specification of training run.
## Setup
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration notebook](../../../configuration.ipynb) first if you haven't.
First, let's validate Azure ML SDK version and connect to workspace.
```
import azureml.core
print(azureml.core.VERSION)
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
ws.get_details()
```
## Use curated environments
Curated environments are provided by Azure Machine Learning and are available in your workspace by default. They contain collections of Python packages and settings to help you get started with different machine learning frameworks.
* The __AzureML-Minimal__ environment contains a minimal set of packages to enable run tracking and asset uploading. You can use it as a starting point for your own environment.
* The __AzureML-Tutorial__ environment contains common data science packages, such as Scikit-Learn, Pandas and Matplotlib, and larger set of azureml-sdk packages.
Curated environments are backed by cached Docker images, reducing the run preparation cost.
You can get a curated environment using
```
from azureml.core import Environment
curated_env = Environment.get(workspace=ws, name="AzureML-Minimal")
```
To list the curated environments, use the following code.
**Note**: The name prefixes _AzureML_ and _Microsoft_ are reserved for curated environments. Do not use them for your own environments.
```
envs = Environment.list(workspace=ws)
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
```
## Create your own environment
You can create an environment by instantiating an ```Environment``` object and then setting its attributes: the set of Python packages, environment variables and others.
### Add Python packages
The recommended way is to specify Conda packages, as they typically come with a complete set of pre-built binaries.
```
from azureml.core.environment import CondaDependencies
myenv = Environment(name="myenv")
conda_dep = CondaDependencies()
conda_dep.add_conda_package("scikit-learn")
```
You can also add pip packages, and specify the package version
```
conda_dep.add_pip_package("pillow==5.4.1")
myenv.python.conda_dependencies=conda_dep
```
### Specify environment variables
You can add environment variables to your environment. These then become available using ```os.environ.get``` in your training script.
```
myenv.environment_variables = {"MESSAGE":"Hello from Azure Machine Learning"}
```
## Submit run using environment
When you submit a run, you can specify which environment to use.
On the first run in a given environment, Azure ML spends some time building the environment. On subsequent runs, Azure ML keeps track of changes and uses the existing environment, resulting in faster run completion.
```
from azureml.core import ScriptRunConfig, Experiment
myexp = Experiment(workspace=ws, name = "environment-example")
```
To submit a run, create a run configuration that combines the script file and environment, and pass it to ```Experiment.submit```. In this example, the script is submitted to local computer, but you can specify other compute targets such as remote clusters as well.
```
src = ScriptRunConfig(source_directory=".",
script="example.py",
compute_target="local",
environment=myenv)
run = myexp.submit(config=src)
run.wait_for_completion(show_output=True)
```
To audit the environment used for a run, you can use ```get_environment```.
```
run.get_environment()
```
## Register environment
You can manage environments by registering them. This allows you to track their versions, and reuse them in future runs. For example, once you've constructed an environment that meets your requirements, you can register it and use it in other experiments so as to standardize your workflow.
If you register the environment with the same name, the version number is increased by one. Note that Azure ML keeps track of differences between versions, so if you re-register an identical version, the version number is not increased.
```
myenv.register(workspace=ws)
```
## List and get existing environments
Your workspace contains a dictionary of registered environments. You can then use ```Environment.get``` to retrieve a specific environment with a specific version.
```
for name,env in ws.environments.items():
print("Name {} \t version {}".format(name,env.version))
restored_environment = Environment.get(workspace=ws,name="myenv",version="1")
print("Attributes of restored environment")
restored_environment
```
## Other ways to create environments
### From existing Conda environment
You can create an environment from an existing conda environment. This makes it easy to reuse your local interactive environment in Azure ML remote runs. For example, if you've created a conda environment using
```
conda create -n mycondaenv
```
you can create Azure ML environment out of that conda environment using
```
myenv = Environment.from_existing_conda_environment(name="myenv",conda_environment_name="mycondaenv")
```
### From conda or pip files
You can create environments from conda specification or pip requirements files using
```
myenv = Environment.from_conda_specification(name="myenv", file_path="path-to-conda-specification-file")
myenv = Environment.from_pip_requirements(name="myenv", file_path="path-to-pip-requirements-file")
```
## Using environments for inferencing
You can re-use the training environment when you deploy your model as a web service by specifying the inferencing stack version and then adding the environment to ```InferenceConfig```.
```
from azureml.core.model import InferenceConfig
myenv.inferencing_stack_version = "latest"
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
```
See [Register Model and deploy as Webservice Notebook](../../deployment/deploy-to-cloud/model-register-and-deploy.ipynb) for an end-to-end example of web service deployment.
## Docker settings
A Docker container provides an efficient way to encapsulate the dependencies. When you enable Docker, Azure ML builds a Docker image and creates a Python environment within that container, given your specifications. The Docker images are reused: the first run in a new environment typically takes longer as the image is built.
**Note:** For runs on a local computer or an attached virtual machine, that computer must have Docker installed and enabled. Machine Learning Compute has Docker pre-installed.
Attribute ```docker.enabled``` controls whether to use Docker container or host OS for execution.
```
myenv.docker.enabled = True
```
You can specify a custom Docker base image and registry. This allows you to customize and control in detail the guest OS in which your training run executes.
```
myenv.docker.base_image
myenv.docker.base_image_registry
```
You can also specify shared volumes, and shm size.
```
myenv.docker.shared_volumes
myenv.docker.shm_size
```
## Spark and Azure Databricks settings
In addition to Python and Docker settings, Environment also contains attributes for Spark and Azure Databricks runs. These attributes become relevant when you submit runs on those compute targets.
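For example, you can inspect these sections on an environment object (a minimal sketch; exactly which attributes they expose depends on your SDK version):
```
# Look at the Spark and Databricks sections of the environment
print(myenv.spark)
print(myenv.databricks)
```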
## Next steps
Learn more about remote runs on different compute targets:
* [Train on ML Compute](../../training/train-on-amlcompute/train-on-amlcompute.ipynb)
* [Train on remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb)
Learn more about registering and deploying a model:
* [Register Model and deploy as Webservice](../../deployment/deploy-to-cloud/model-register-and-deploy.ipynb)
# Multitasking in Python
Out of the box, Python works in a **sequential** way, which means it does **one thing at a time**, one after another. If the program has to wait for some reason (e.g. an HTTP request), it does nothing else while waiting.
Another important fact: Python uses a **single core** only. If you have a lot to calculate, Python does not automatically distribute the work onto multiple cores for you.
If we want to do things faster by implementing multitasking in Python, we first have to answer these questions:
* is the bottleneck **CPU** bound (lot of calculations, compressing, etc.)?
* is the bottleneck **IO** bound (waiting for network, slow harddisk)?
In order to understand multitasking a bit better, we need to clarify some **general concepts**:
### Parallelism
* doing calculations in parallel instead of sequential
* using more than one CPU
* hope we reach the solution in less wallclock time
* parallelism is a concept from the **solution domain**
**Examples**
* parse many files simultaneously
* distribute a computational task over many processors or nodes
### Asynchrony
* reacting to things that will happen in future
* we do not know when these things will happen
* asynchrony is **event driven**
**Examples**
* onClick mousevents in the browser
* File change notifications
* Incoming requests to a server
* Incoming packets of data to a socket
### Concurrency
* several computations are executed concurrently – during overlapping time periods – instead of sequentially
* concurrency control: coordinate access to a shared resource
* concurrency is often a part of the **problem domain**
**Examples**
* booking system for a flight
* banking account
* database updates
## Implementation concepts
To make things more challenging, Python offers various implementations. I tried to group them:
### threading
* run several «trains of thought» on a single processor
* **pre-emptive multitasking**
* The operating system knows about each thread and can interrupt it at any time to start running a different thread
* The OS decides when to switch tasks
* implemented in the [threading](https://docs.python.org/3/library/threading.html) module, using a [Queue](https://docs.python.org/3/library/queue.html#module-queue) to tackle concurrency problems (see the sketch after this list)
* implemented in the [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html) module, using the [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor)
* suitable for typical IO problems
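A minimal sketch of the `threading` + `Queue` pattern listed above (an assumed toy workload, not from the original notebook): a queue feeds a few worker threads, and sentinel values tell them when to stop.
```
import threading
import queue

def worker(q):
    while True:
        item = q.get()
        if item is None:          # sentinel: no more work
            q.task_done()
            break
        print(f"{threading.current_thread().name} processed {item}")
        q.task_done()

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(3)]
for t in threads:
    t.start()

for item in range(10):
    q.put(item)
for _ in threads:
    q.put(None)                   # one sentinel per worker

q.join()
for t in threads:
    t.join()
```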
### asyncio
* run several «trains of thought» on a single processor
* **cooperative multitasking**
* the tasks must cooperate by announcing when they are ready to be switched out
* the tasks decide when to give up control
* this concept has been implemented in many languages
* implemented in Python using the [asyncio](https://docs.python.org/3/library/asyncio.html) module and the `async` and `await` syntax
* suitable for IO problems
### multiprocessing
* run several «trains of thought» at the same time, using **multiple processors**
* Python creates **new processes** – a collection of resources where the resources include memory, file handles etc
* each process runs in its own Python interpreter
* implemented in the [multiprocessing](https://docs.python.org/3/library/multiprocessing.html) module, using a [Queue](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue) to tackle concurrency problems
* implemented in the [concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html) module, using the [ProcessPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor) (see the sketch after this list)
* suitable for CPU problems
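A minimal CPU-bound sketch using the `ProcessPoolExecutor` (an assumed toy workload; on platforms that use the spawn start method, such as Windows, or inside some notebooks, this is best run as a standalone script so the worker function is importable):
```
import concurrent.futures
import time

def cpu_bound(n):
    # deliberately heavy arithmetic to keep one core busy
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    numbers = [5_000_000 + x for x in range(20)]
    start = time.time()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        results = list(executor.map(cpu_bound, numbers))
    print(f"computed {len(results)} results in {time.time() - start:.2f} seconds")
```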
**Links** for further reading:
* https://realpython.com/python-concurrency/#what-is-concurrency
* https://realpython.com/async-io-python/#async-io-is-not-easy
## Retrieve data from websites sequentially
```
import requests
import time
def download_site(url, session):
with session.get(url) as response:
print(f"Read {len(response.content)} from {url}")
def download_all_sites(sites):
with requests.Session() as session:
for url in sites:
download_site(url, session)
if __name__ == "__main__":
sites = [
"https://www.jython.org",
"http://olympus.realpython.org/dice",
] * 80
start_time = time.time()
download_all_sites(sites)
duration = time.time() - start_time
print(f"Downloaded {len(sites)} in {duration} seconds")
```
## Retrieve data from websites using many threads: `concurrent.futures`
The new `concurrent.futures` module is a convenient way to achieve threading without too many headaches. We simply have to define the maximum number of workers; the queue is set up automatically for us.
Try to set the workers to 1 and see what happens. Increase the workers. What do you observe?
```
import concurrent.futures
import requests
import threading
import time
thread_local = threading.local()
def get_session():
if not hasattr(thread_local, "session"):
thread_local.session = requests.Session()
return thread_local.session
def download_site(url):
session = get_session()
with session.get(url) as response:
print(f"Read {len(response.content)} from {url}")
def download_all_sites(sites):
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
executor.map(download_site, sites)
if __name__ == "__main__":
sites = [
"https://www.jython.org",
"http://olympus.realpython.org/dice",
] * 80
start_time = time.time()
download_all_sites(sites)
duration = time.time() - start_time
print(f"Downloaded {len(sites)} in {duration} seconds")
```
## Threading problems with global variables
Not everything is thread safe. The following example illustrates this: it creates `fake_data` containing the numbers 0 to 4999. Then, we use 500 workers in our `ThreadPoolExecutor` to process this data. The `increment_counter` function cycles 100 times and just increments the global variable. The output should be `100 x 5000 = 500'000`. But is it? Run the example below a few times (you can hit CTRL+Enter to stay in the same cell). What do you observe? (A lock-based fix is sketched after the example.)
```
import concurrent.futures
counter = 0
def increment_counter(fake_value):
global counter
for _ in range(100):
counter += 1
if __name__ == "__main__":
fake_data = [x for x in range(5000)]
counter = 0
with concurrent.futures.ThreadPoolExecutor(max_workers=500) as executor:
executor.map(increment_counter, fake_data)
print(counter)
```
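One common fix (a minimal sketch) is to protect the shared counter with a `threading.Lock`, so only one thread at a time performs the read-increment-write sequence:
```
import concurrent.futures
import threading

counter = 0
counter_lock = threading.Lock()

def increment_counter(fake_value):
    global counter
    for _ in range(100):
        with counter_lock:        # only one thread at a time updates the counter
            counter += 1

if __name__ == "__main__":
    fake_data = [x for x in range(5000)]
    counter = 0
    with concurrent.futures.ThreadPoolExecutor(max_workers=500) as executor:
        executor.map(increment_counter, fake_data)
    print(counter)                # now reliably 500000
```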
## Showcase the `asyncio` module: a gentle introduction
The `asyncio` module implements the Asynchronous I/O framework that can be found in many languages (JavaScript, C++, Java, Raku, etc.). Two new language elements are introduced:
* `async`: declares a function to act as a **coroutine**, short for **cooperative routine**. It means, the function announces when it is going to finish
* `await`: a marker: wait here until the awaited coroutine returns; in the meantime, control can be handed to other tasks.
```
import asyncio
async def worker():
print("I am working")
await asyncio.sleep(1)
print("I am working some more...")
await asyncio.sleep(2)
print("5:30 pm! Time to go home!")
async def main():
await worker()
```
**Notice:** In a normal script, you would use `asyncio.run(main())` to start the **main event loop** with our `main` method. But this throws an error:
```
asyncio.run() cannot be called from a running event loop
```
(If you are using Python 3.6, you need a [different syntax](https://docs.python.org/3.6/library/asyncio-task.html) to start the main event loop)
```
asyncio.run(main())
```
Because we are in **Jupyter**, a main event loop has already started for us, so we can use `await` directly:
```
await main()
```
Not very exciting, is it? No different from sequential programming. But what happens if we add **more workers**?
We can achieve this with `asyncio.gather(tasks)`
```
async def main():
    await asyncio.gather(worker(), worker(), worker())
await main()
```
While the first worker starts sleeping, the second starts working, goes to sleep, passes the thread back to worker 1, etc.
## Showcase the `asyncio` module: doing things independently of each other
Let's look at a more real-life example: we would like to **read a book** and **check our whatsapp** at the same time:
```
import asyncio
# convert to coroutine
async def reading_book():
print("reading page 1")
await asyncio.sleep(4)
print("reading page 2")
await asyncio.sleep(4)
print("reading page 3")
await asyncio.sleep(4)
print("reading page 4")
async def seconds():
i = 0
while True:
print(f"\t{i} seconds")
i += 1
await asyncio.sleep(1)
if i > 20:
break
# convert to coroutine
async def checking_whatsapp():
print("reading new message 1")
await asyncio.sleep(2)
print("reading new message 2")
await asyncio.sleep(4)
print("reading new message 3")
await asyncio.sleep(1)
print("reading new message 4")
async def main(tasks):
await asyncio.gather(*[task for task in tasks])
await main([reading_book(), checking_whatsapp(), seconds()])
```
## Showcase the `asyncio` module: things depend on each other
```
import asyncio
import random
import time
async def part1(n: int) -> str:
i = random.randint(0, 10)
print(f"part1({n}) sleeping for {i} seconds.")
await asyncio.sleep(i)
result = f"result{n}-1"
print(f"Returning part1({n}) == {result}.")
return result
async def part2(n: int, arg: str) -> str:
i = random.randint(0, 10)
print(f"part2{n, arg} sleeping for {i} seconds.")
await asyncio.sleep(i)
result = f"result{n}-2 derived from {arg}"
print(f"Returning part2{n, arg} == {result}.")
return result
async def chain(n: int) -> None:
start = time.perf_counter()
p1 = await part1(n)
p2 = await part2(n, p1)
end = time.perf_counter() - start
print(f"-->Chained result{n} => {p2} (took {end:0.2f} seconds).")
async def main(*args):
await asyncio.gather(*(chain(n) for n in args))
random.seed(444) # setting the seed to a specific value allows to reproduce randomness :)
args = [1,2,3]
await main(*args)
```
```
import numpy as np
import cv2
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)  # open the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)  # set the frame width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)  # set the frame height
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
# Display the resulting frame
cv2.imshow('In Color but not inverted',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)  # open the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)  # set the frame width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)  # set the frame height
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
frame = cv2.flip(frame, 1)#flipping the frames laterally to give a mirror like effect
# Our operations on the frame come here
#gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('In Color',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)  # open the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)  # set the frame width
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)  # set the frame height
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
    frame = cv2.flip(frame, -1)  # flipping the frame around both axes (upside down and mirrored)
# Our operations on the frame come here
#gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('flipped up side down',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
cap = cv2.VideoCapture(0)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
frame = cv2.flip(frame, 1)#flipping the frames laterally to give a mirror like effect
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('frame',gray)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
```
|
github_jupyter
|
import numpy as np
import cv2
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)#located the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)#the length of the frame
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)#the width of the fram
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
# Display the resulting frame
cv2.imshow('In Color but not inverted',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)#located the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)#the length of the frame
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)#the width of the fram
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
frame = cv2.flip(frame, 1)#flipping the frames laterally to give a mirror like effect
# Our operations on the frame come here
#gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('In Color',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
camera = 0
#connecting to the webcam and initialization
cap = cv2.VideoCapture(camera)#located the integrated camera
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1000)#the length of the frame
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)#the width of the fram
cap = cv2.VideoCapture(camera)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
frame = cv2.flip(frame, -1)#flipping the frames laterally to give a mirror like effect
# Our operations on the frame come here
#gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('flipped up side down',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
cap = cv2.VideoCapture(0)
while(True):
# Capture frame-by-frame
ret, frame = cap.read()
frame = cv2.flip(frame, 1)#flipping the frames laterally to give a mirror like effect
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Display the resulting frame
cv2.imshow('frame',gray)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
| 0.40157 | 0.425486 |
# Explore features/clustering
## Get data
```
!ls -l ../fossilnet-png-224px/*/*/*.png | wc | awk '{print $1 " PNG files"}'
```
We're not trying to make a fantastic model here, and this is a hard dataset.
So I'm only going to use `train` and `val`, and I'm only going to use 4 classes.
Let's read the files, do a bit of processing on them (make greyscale and resize), and I'll also save a flipped version, so I'll have 2 versions of each image.
```
import numpy as np
def img_to_arr(img):
"""
Apply the same processing we used in training: greyscale and resize.
"""
img = img.convert(mode='L').resize((32, 32))
return np.asarray(img).ravel() / 255
import os
from glob import glob
from PIL import Image
from collections import defaultdict
sets = ['train', 'val']
classes = ['trilobites', 'fishes', 'forams', 'dinosaurs']
data = defaultdict(list)
labels = defaultdict(list)
for set_ in sets:
for class_ in classes:
for fname in glob(f'../fossilnet/{set_}/{class_}/*.png'):
img = Image.open(fname)
arr = img_to_arr(img)
data[set_].append(arr.ravel())
data[set_].append(np.fliplr(arr.reshape(32, 32)).ravel())
labels[set_] += 2 * [class_]
X_train = np.array(data['train'])
X_val = np.array(data['val'])
y_train = np.array(labels['train'])
y_val = np.array(labels['val'])
X_train.shape
X_train[501, 100]
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data['train'][503].reshape(32, 32))
plt.axis('off')
plt.show()
# empty dict to hold projection results
projection = {}
```
## Visualize feature space
Start with UMAP & t-SNE
```
import umap
reducer = umap.UMAP()
projection['umap'] = reducer.fit_transform(X_train)
import seaborn as sns
import pandas as pd
colors = {'trilobites':'red', 'forams':'blue', 'fishes':'green', 'dinosaurs':'black'}
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['umap'][:, 0][mask], projection['umap'][:, 1][mask], label=label)
ax.set_title('Umap projection', fontsize=18)
ax.legend()
plt.show()
ax.set_aspect('equal', 'datalim')
ax.set_title('UMAP projection of the fossil training dataset', fontsize=24);
from sklearn.manifold import TSNE
import time
time_start = time.time()
tsne_model = TSNE(random_state=42, n_jobs=-1)
projection['tsne'] = tsne_model.fit_transform(X_train)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['tsne'][:, 0][mask], projection['tsne'][:, 1][mask], label=label)
ax.legend()
plt.show()
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
projection['pca'] = pca.fit_transform(X_train)
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['pca'][:, 0][mask], projection['pca'][:, 1][mask], label=label)
ax.legend()
plt.show()
```
## Viz with Plotly
```
import plotly.express as px
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=y_train, width=800, height=600)
fig.show()
# labels = pd.DataFrame({'type': y_train, 'kmeans': kmeans_result})  # note: kmeans_result is only defined in the clustering section below
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
# Load Data
# Build App
app = JupyterDash(__name__)
app.layout = html.Div([
html.H1("Fossils-JupyterDash"),
dcc.Graph(id='graph'),
html.Label([
"Projection",
dcc.Dropdown(
id='projection-dropdown', clearable=False,
value='umap', options=[
{'label': c, 'value': c}
for c in ['umap', 'tsne', 'pca']
])
]),
])
# Define callback to update graph
@app.callback(
Output('graph', 'figure'),
[Input("projection-dropdown", "value")]
)
def update_figure(proj_name):
data_x = projection[proj_name][:, 0]
data_y = projection[proj_name][:, 1]
return px.scatter(x=data_x,
y=data_y,
color=y_train,
render_mode="webgl",
title="Projected fossil features",
width=800, height=600)
# Run app and display result inline in the notebook
#app.run_server(mode='external')
#app.run_server(mode='inline')
app.run_server(debug=False, port=8080, host='0.0.0.0')
```
## Clustering
### Kmeans
```
from sklearn.cluster import KMeans, DBSCAN
kmeans = KMeans(n_clusters=4, random_state=42).fit(X_train)
kmeans_result = kmeans.predict(X_train)
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=kmeans_result, width=800, height=600)
fig.show()
app = JupyterDash(__name__)
app.layout = html.Div([
html.H1("Fossils-JupyterDash"),
dcc.Graph(id='graph'),
html.Label([
"Number of clusters",
dcc.Slider(
id='num-clusters-slider',
min=1,
max=10,
step=1,
marks={i: '{}'.format(i) for i in range(10)},
value=1
)
]),
html.Label([
"Projection",
dcc.Dropdown(
id='projection-dropdown', clearable=False,
value='umap', options=[
{'label': c, 'value': c}
for c in ['umap', 'tsne', 'pca']
])
]),
])
# Define callback to update graph
@app.callback(
Output('graph', 'figure'),
[Input("num-clusters-slider", "value"),
Input("projection-dropdown", "value")]
)
def update_figure(num_clusters, proj_name):
kmeans = KMeans(n_clusters=num_clusters, random_state=42).fit(X_train)
kmeans_result = kmeans.predict(X_train)
data_x = projection[proj_name][:, 0]
data_y = projection[proj_name][:, 1]
return px.scatter(x=data_x,
y=data_y,
color=kmeans_result,
render_mode="webgl",
title="Projected fossil features",
width=800, height=600)
# Run app and display result inline in the notebook
app.run_server(mode='inline')
#app.run_server(mode='external')
#app.run_server(debug='False',port=8080,host='0.0.0.0',mode='inline')
```
### DBScan
```
pca_model = PCA(n_components=3)
pca_data = pca_model.fit_transform(X_train)
dbscan = DBSCAN(eps=.7, min_samples=6).fit(pca_data)
dbscan_result = dbscan.labels_
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=dbscan_result, width=800, height=600)
fig.show()
```
|
github_jupyter
|
!ls -l ../fossilnet-png-224px/*/*/*.png | wc | awk '{print $1 " PNG files"}'
import numpy as np
def img_to_arr(img):
"""
Apply the same processing we used in training: greyscale and resize.
"""
img = img.convert(mode='L').resize((32, 32))
return np.asarray(img).ravel() / 255
import os
from glob import glob
from PIL import Image
from collections import defaultdict
sets = ['train', 'val']
classes = ['trilobites', 'fishes', 'forams', 'dinosaurs']
data = defaultdict(list)
labels = defaultdict(list)
for set_ in sets:
for class_ in classes:
for fname in glob(f'../fossilnet/{set_}/{class_}/*.png'):
img = Image.open(fname)
arr = img_to_arr(img)
data[set_].append(arr.ravel())
data[set_].append(np.fliplr(arr.reshape(32, 32)).ravel())
labels[set_] += 2 * [class_]
X_train = np.array(data['train'])
X_val = np.array(data['val'])
y_train = np.array(labels['train'])
y_val = np.array(labels['val'])
X_train.shape
X_train[501, 100]
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(data['train'][503].reshape(32, 32))
plt.axis('off')
plt.show()
# empty dict to hold projection results
projection = {}
import umap
reducer = umap.UMAP()
projection['umap'] = reducer.fit_transform(X_train)
import seaborn as sns
import pandas as pd
colors = {'trilobites':'red', 'forams':'blue', 'fishes':'green', 'dinosaurs':'black'}
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['umap'][:, 0][mask], projection['umap'][:, 1][mask], label=label)
ax.set_title('Umap projection', fontsize=18)
ax.legend()
plt.show()
ax.set_aspect('equal', 'datalim')
ax.set_title('UMAP projection of the fossil training dataset', fontsize=24);
from sklearn.manifold import TSNE
import time
time_start = time.time()
tsne_model = TSNE(random_state=42, n_jobs=-1)
projection['tsne'] = tsne_model.fit_transform(X_train)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['tsne'][:, 0][mask], projection['tsne'][:, 1][mask], label=label)
ax.legend()
plt.show()
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
projection['pca'] = pca.fit_transform(X_train)
fig, ax = plt.subplots(figsize=(10,10))
for label in np.unique(y_train):
mask = y_train==label
ax.scatter(projection['pca'][:, 0][mask], projection['pca'][:, 1][mask], label=label)
ax.legend()
plt.show()
import plotly.express as px
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=y_train, width=800, height=600)
fig.show()
# labels = pd.DataFrame({'type': y_train, 'kmeans': kmeans_result})  # note: kmeans_result is only defined in the clustering section below
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
# Load Data
# Build App
app = JupyterDash(__name__)
app.layout = html.Div([
html.H1("Fossils-JupyterDash"),
dcc.Graph(id='graph'),
html.Label([
"Projection",
dcc.Dropdown(
id='projection-dropdown', clearable=False,
value='umap', options=[
{'label': c, 'value': c}
for c in ['umap', 'tsne', 'pca']
])
]),
])
# Define callback to update graph
@app.callback(
Output('graph', 'figure'),
[Input("projection-dropdown", "value")]
)
def update_figure(proj_name):
data_x = projection[proj_name][:, 0]
data_y = projection[proj_name][:, 1]
return px.scatter(x=data_x,
y=data_y,
color=y_train,
render_mode="webgl",
title="Projected fossil features",
width=800, height=600)
# Run app and display result inline in the notebook
#app.run_server(mode='external')
#app.run_server(mode='inline')
app.run_server(debug=False, port=8080, host='0.0.0.0')
from sklearn.cluster import KMeans, DBSCAN
kmeans = KMeans(n_clusters=4, random_state=42).fit(X_train)
kmeans_result = kmeans.predict(X_train)
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=kmeans_result, width=800, height=600)
fig.show()
app = JupyterDash(__name__)
app.layout = html.Div([
html.H1("Fossils-JupyterDash"),
dcc.Graph(id='graph'),
html.Label([
"Number of clusters",
dcc.Slider(
id='num-clusters-slider',
min=1,
max=10,
step=1,
marks={i: '{}'.format(i) for i in range(10)},
value=1
)
]),
html.Label([
"Projection",
dcc.Dropdown(
id='projection-dropdown', clearable=False,
value='umap', options=[
{'label': c, 'value': c}
for c in ['umap', 'tsne', 'pca']
])
]),
])
# Define callback to update graph
@app.callback(
Output('graph', 'figure'),
[Input("num-clusters-slider", "value"),
Input("projection-dropdown", "value")]
)
def update_figure(num_clusters, proj_name):
kmeans = KMeans(n_clusters=num_clusters, random_state=42).fit(X_train)
kmeans_result = kmeans.predict(X_train)
data_x = projection[proj_name][:, 0]
data_y = projection[proj_name][:, 1]
return px.scatter(x=data_x,
y=data_y,
color=kmeans_result,
render_mode="webgl",
title="Projected fossil features",
width=800, height=600)
# Run app and display result inline in the notebook
app.run_server(mode='inline')
#app.run_server(mode='external')
#app.run_server(debug='False',port=8080,host='0.0.0.0',mode='inline')
pca_model = PCA(n_components=3)
pca_data = pca_model.fit_transform(X_train)
dbscan = DBSCAN(eps=.7, min_samples=6).fit(pca_data)
dbscan_result = dbscan.labels_
fig = px.scatter(x=projection['umap'][:, 0], y=projection['umap'][:, 1], color=dbscan_result, width=800, height=600)
fig.show()
| 0.544317 | 0.848816 |
# Huggingface Sagemaker-sdk - Spot instances example
### Binary Classification with `Trainer` and `imdb` dataset
# Introduction
Welcome to our end-to-end binary Text-Classification example. In this demo, we will use the Hugging Face `transformers` and `datasets` libraries together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer for binary text classification. In particular, the pre-trained model will be fine-tuned using the `imdb` dataset. To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on. This demo will also show how you can use spot instances and resume training from checkpoints.

_**NOTE: You can run this demo in Sagemaker Studio, your local machine or Sagemaker Notebook Instances**_
# Development Environment and Permissions
## Installation
_*Note:* we only install the required libraries from Hugging Face and AWS. You also need PyTorch or TensorFlow, if you don't have it installed already._
```
!pip install "sagemaker>=2.48.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgrade
import sagemaker.huggingface
```
## Permissions
_If you are going to use SageMaker in a local environment, you need access to an IAM role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
```
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
```
# Preprocessing
We are using the `datasets` library to download and preprocess the `imdb` dataset. After preprocessing, the dataset will be uploaded to our `sagemaker_session_bucket` to be used within our training job. The [imdb](http://ai.stanford.edu/~amaas/data/sentiment/) dataset consists of 25000 training and 25000 testing highly polar movie reviews.
## Tokenization
```
from datasets import load_dataset
from transformers import AutoTokenizer
# tokenizer used in preprocessing
tokenizer_name = 'distilbert-base-uncased'
# dataset used
dataset_name = 'imdb'
# s3 key prefix for the data
s3_prefix = 'samples/datasets/imdb'
# load dataset
dataset = load_dataset(dataset_name)
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
train_dataset, test_dataset = load_dataset('imdb', split=['train', 'test'])
test_dataset = test_dataset.shuffle().select(range(10000)) # shrink the test dataset to 10k examples
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
```
## Uploading data to `sagemaker_session_bucket`
After we have processed the datasets, we are going to use the new `FileSystem` [integration](https://huggingface.co/docs/datasets/filesystems.html) to upload our dataset to S3.
```
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path,fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path,fs=s3)
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
```
# Fine-tuning & starting Sagemaker Training Job
In order to create a SageMaker training job we need a `HuggingFace` Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In an Estimator we define which fine-tuning script should be used as `entry_point`, which `instance_type` should be used, and which `hyperparameters` are passed in.
```python
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters = {'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
})
```
When we create a SageMaker training job, SageMaker takes care of starting and managing all the required EC2 instances for us with the `huggingface` container, uploads the provided fine-tuning script `train.py`, and downloads the data from our `sagemaker_session_bucket` into the container at `/opt/ml/input/data`. Then, it starts the training job by running:
```python
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
```
The `hyperparameters` you define in the `HuggingFace` estimator are passed in as named arguments.
SageMaker provides useful properties about the training environment through various environment variables, including the following:
* `SM_MODEL_DIR`: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_CHANNEL_XXXX:` A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator’s fit call, named `train` and `test`, the environment variables `SM_CHANNEL_TRAIN` and `SM_CHANNEL_TEST` are set.
To run your training job locally you can define `instance_type='local'` or `instance_type='local_gpu'` for GPU usage. _Note: this does not work within SageMaker Studio._
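As a rough sketch (not the actual `train.py` shipped in `./scripts`, and with illustrative argument names), a training script can pick up the hyperparameters and these environment variables like this:
```python
# Rough sketch of how a training script can read the hyperparameters and the
# SageMaker-provided environment variables mentioned above.
import argparse
import os

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # hyperparameters defined in the HuggingFace estimator arrive as named arguments
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--train_batch_size", type=int, default=32)
    parser.add_argument("--model_name", type=str, default="distilbert-base-uncased")
    # paths and resources provided by SageMaker through environment variables
    parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--n_gpus", type=int, default=int(os.environ.get("SM_NUM_GPUS", 0)))
    parser.add_argument("--training_dir", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--test_dir", type=str, default=os.environ.get("SM_CHANNEL_TEST"))
    args, _ = parser.parse_known_args()
    print(args)
```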
```
!pygmentize ./scripts/train.py
```
## Creating an Estimator and start a training job
```
from sagemaker.huggingface import HuggingFace
import time
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 1, # number of training epochs
'train_batch_size': 32, # batch size for training
'eval_batch_size': 64, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':'distilbert-base-uncased', # pre-trained model
'fp16': True, # Whether to use 16-bit (mixed) precision training
'output_dir':'/opt/ml/checkpoints', # output_dir where our checkpoints will be saved
}
# s3 uri where our checkpoints will be uploaded during training
job_name = f'huggingface-workshop-using-spot-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}'
# s3 directory for our uploaded checkpoints
checkpoint_s3_uri = f's3://{sess.default_bucket()}/{job_name}/checkpoints'
# create the Estimator
huggingface_estimator = HuggingFace(
    entry_point = 'train.py', # fine-tuning script used in training job
source_dir = './scripts', # directory where fine-tuning script is stored
instance_type = 'ml.p3.2xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = job_name, # the name of the training job
    role = role, # IAM role used in training job to access AWS resources, e.g. S3
transformers_version = '4.6.1', # the transformers version used in the training job
pytorch_version = '1.7.1', # the pytorch_version version used in the training job
py_version = 'py36', # the python version used in the training job
hyperparameters = hyperparameters, # the hyperparameter used for running the training job
checkpoint_s3_uri = checkpoint_s3_uri, # s3 directory for our uploaded checkpoints
    use_spot_instances = True, # whether to use spot instances or not
    max_wait = 3600, # this should be equal to or greater than max_run in seconds
max_run = 1000, # expected max run in seconds
)
# define a data input dictonary with our uploaded s3 uris
data = {
'train': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data)
# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
```
## Deploying the endpoint
To deploy our endpoint, we call `deploy()` on our HuggingFace estimator object, passing in our desired number of instances and instance type.
```
predictor = huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
```
Then, we use the returned predictor object to call the endpoint.
```
sentiment_input= {"inputs":"I love using the new Inference DLC."}
predictor.predict(sentiment_input)
```
Finally, we delete the endpoint again.
```
predictor.delete_endpoint()
```
|
github_jupyter
|
!pip install "sagemaker>=2.48.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgrade
import sagemaker.huggingface
import sagemaker
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it not exists
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")
from datasets import load_dataset
from transformers import AutoTokenizer
# tokenizer used in preprocessing
tokenizer_name = 'distilbert-base-uncased'
# dataset used
dataset_name = 'imdb'
# s3 key prefix for the data
s3_prefix = 'samples/datasets/imdb'
# load dataset
dataset = load_dataset(dataset_name)
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# tokenizer helper function
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True)
# load dataset
train_dataset, test_dataset = load_dataset('imdb', split=['train', 'test'])
test_dataset = test_dataset.shuffle().select(range(10000)) # smaller the size for test dataset to 10k
# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)
# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format('torch', columns=['input_ids', 'attention_mask', 'labels'])
import botocore
from datasets.filesystems import S3FileSystem
s3 = S3FileSystem()
# save train_dataset to s3
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
train_dataset.save_to_disk(training_input_path,fs=s3)
# save test_dataset to s3
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
test_dataset.save_to_disk(test_input_path,fs=s3)
training_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/train'
test_input_path = f's3://{sess.default_bucket()}/{s3_prefix}/test'
huggingface_estimator = HuggingFace(entry_point='train.py',
source_dir='./scripts',
base_job_name='huggingface-sdk-extension',
instance_type='ml.p3.2xlarge',
instance_count=1,
transformers_version='4.4',
pytorch_version='1.6',
py_version='py36',
role=role,
hyperparameters = {'epochs': 1,
'train_batch_size': 32,
'model_name':'distilbert-base-uncased'
})
/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --train_batch_size 32
!pygmentize ./scripts/train.py
from sagemaker.huggingface import HuggingFace
import time
# hyperparameters, which are passed into the training job
hyperparameters={'epochs': 1, # number of training epochs
'train_batch_size': 32, # batch size for training
'eval_batch_size': 64, # batch size for evaluation
'learning_rate': 3e-5, # learning rate used during training
'model_id':'distilbert-base-uncased', # pre-trained model
'fp16': True, # Whether to use 16-bit (mixed) precision training
'output_dir':'/opt/ml/checkpoints', # output_dir where our checkpoints will be saved
}
# s3 uri where our checkpoints will be uploaded during training
job_name = f'huggingface-workshop-using-spot-{time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime())}'
# s3 directory for our uploaded checkpoints
checkpoint_s3_uri = f's3://{sess.default_bucket()}/{job_name}/checkpoints'
# create the Estimator
huggingface_estimator = HuggingFace(
entry_point = 'train.py', # fine-tuning script used in training jon
source_dir = './scripts', # directory where fine-tuning script is stored
instance_type = 'ml.p3.2xlarge', # instances type used for the training job
instance_count = 1, # the number of instances used for training
base_job_name = job_name, # the name of the training job
role = role, # Iam role used in training job to access AWS ressources, e.g. S3
transformers_version = '4.6.1', # the transformers version used in the training job
pytorch_version = '1.7.1', # the pytorch_version version used in the training job
py_version = 'py36', # the python version used in the training job
hyperparameters = hyperparameters, # the hyperparameter used for running the training job
checkpoint_s3_uri = checkpoint_s3_uri, # s3 directory for our uploaded checkpoints
use_spot_instances = True, # Wether to use spot instances or not
max_wait = 3600, # This should be equal to or greater than max_run in seconds'
max_run = 1000, # expected max run in seconds
)
# define a data input dictonary with our uploaded s3 uris
data = {
'train': training_input_path,
'test': test_input_path
}
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit(data)
# Training seconds: 874
# Billable seconds: 262
# Managed Spot Training savings: 70.0%
predictor = huggingface_estimator.deploy(1,"ml.g4dn.xlarge")
sentiment_input= {"inputs":"I love using the new Inference DLC."}
predictor.predict(sentiment_input)
predictor.delete_endpoint()
| 0.4917 | 0.987351 |
# Lecture 7: Bias-Variance Tradeoff, Regularization
## 4/1/19
### Hosted by and maintained by the [Statistics Undergraduate Students Association (SUSA)](https://susa.berkeley.edu). Authored by [Ajay Raj](mailto:[email protected]), [Nichole Sun](mailto:[email protected]), [Rosa Choe](mailto:[email protected]), [Calvin Chen](mailto:[email protected]), and [Roland Chin](mailto:[email protected]).
### Table Of Contents
* [Recap](#recap)
* [Bias-Variance Tradeoff](#bv-tradeoff)
* [Bias](#bias)
* [Variance](#variance)
* [The Tradeoff](#the-tradeoff)
* [Polynomial Regression](#polynomial-regression)
* [Regularization](#regularization)
* [Ridge](#ridge)
* [LASSO](#lasso)
* [Visualizing Ridge and Lasso](#visualizing-ridge-and-lasso)
* [Regularization and Bias Variance](#regularization-and-bias-variance)
* [Lambda](#lambda)
* [Validation on Lambda](#validation-on-lambda)
* [Exercises](#exercises)
```
import matplotlib.pyplot as plt
import random
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from plotting import overfittingDemo, plot_multiple_linear_regression, ridgeRegularizationDemo, lassoRegularizationDemo
from scipy.optimize import curve_fit
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
```
<a id='recap'></a>
# Recap

High bias corresponds to underfitting. If we look at the first model, the points seem to follow some sort of curve, but our predictor is linear and therefore, unable to capture all the points. In this case, we have chosen a model which is not complex enough to accurately capture all the information from our data set.
If we look at the last model, the predictor is now overly complex because it adjusts based on every point in order to get as close to every data point as possible. In this case, the model changes too much based on small fluctuations caused by insignificant details in the data. This model is fitting more to noise than signal.
<a id='bv-tradeoff'></a>
# Bias-Variance Tradeoff
Today we'll perform **model evaluation**, where we'll judge how well our linear regression models actually perform. Last week, we talked about **loss functions**, which assign a numerical value to how far your model's predictions are from the true values.
$$\text{Mean Squared Error: } \frac{1}{n}\sum_{i=1}^n \left(y_i - f(x_i)\right)^2$$
In this loss function, $y_i$ is a scalar, and $x_i$ is a $p$ dimensional vector, because there are $p$ features. This loss is called **mean squared error**, or **MSE**.
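As a quick sanity check of the formula (on made-up numbers), MSE is straightforward to compute with NumPy:
```python
# Quick illustration of MSE on made-up numbers.
import numpy as np

y_true = np.array([3.0, 5.0, 7.5])    # actual values
y_pred = np.array([2.5, 5.5, 9.0])    # model predictions
np.mean((y_true - y_pred) ** 2)       # mean squared error
```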
Now, we'll talk about other ways to evaluate a model.
**First, let's define some terms.**
We can say that everything in the universe can be described with the following equation:
$$y = h(x) + \epsilon$$
- $y$ is the quantity you are trying to model or approximate
- $x$ are the features (independent variables)
- $h$ is the **true model** for $y$ in terms of $x$
- $\epsilon$ represents **noise**, a random variable which has mean zero
Let $f$ be your model for $y$ in terms of $x$.
<a id='bias'></a>
## Bias
When evaluating a model, the most intuitive first step is to look at how well the model performs. For classification, this may be the percentage of data points correctly classified; for regression it may be how close the predicted values are to the actual ones. The **bias** of a model is *a measure of how far, on average, the predictions of an average model are from the actual values*.
Note that bias is **not** a measure of a single model; it encapsulates the scenario in which we collect many datasets, create a model for each dataset, and average the error over all of those models. Bias is not a measure of error for a single model, but a more abstract concept describing the average error over all such models. A low value for the bias of a model means that, on average, our predictions are similar to the actual values.
<a id='variance'></a>
## Variance
The **variance** of a model relates to the variance of the distribution of all such models. In the previous section about bias, we envisioned the scenario of collecting many datasets, creating a model for each dataset, and averaging the error over all the datasets. The variance of a model instead describes the variance in its predictions. While we might be able to predict a value very well on average, if the variance of the predictions is very high this may not be very helpful: when we train a model we only have one such instance, and a high model variance means that single instance tells us little about the true nature of the predictions. A low variance means our model will not predict very different values for different datasets.
**We can take a look at how bias and variance differ and can be explained in a dataset with the following diagram:**

Image from http://scott.fortmann-roe.com/docs/BiasVariance.html
The image describes what bias and variance are in a more simplified example. Consider that we would like to create a model that selects a point close to the center. The models on the top row have low bias, meaning the center of the cluster is close to the red dot on the target. The models in the left column have low variance: the clusters are quite tight, meaning our predictions are close together.
**Question: What do the blue dots represent? What about the bullseye?**
**Question: What is the order of best scenarios?**
<a id='the-tradeoff'></a>
## The Tradeoff
We are trying to minimize **expected error**, or the average **MSE** over all datasets. It turns out (with some advanced probability gymnastics), that:
$$\text{Mean Squared Error} = \text{Noise Variance} + \text{Bias}^2 + \text{Variance}$$
Note that $\text{Noise Variance}$ is constant: we assume there is some noise, and the $\text{noise variance}$ is simply a value that describes how noisy your dataset will be on average. This is often also called "irreducible noise", as it is literally irreducible; we cannot avoid it.
Furthermore, notice that the equation above is the sum of (squared) bias and variance. Thus there is a literal tradeoff between these two, since decreasing one increases the other. This defines what is known as the **bias variance tradeoff**.

Image from http://scott.fortmann-roe.com/docs/BiasVariance.html
**Why does this happen?**
At some point, as we decrease **bias**, instead of getting closer to the **true model** $h$, we go past and try to fit to the $\epsilon$ (noise) that is part of our current dataset. This is equivalent to making our model more noisy, or **overfit** on our dataset, which means that over all datasets, it has more **variance**.
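To see this in numbers, here's a small simulation with a made-up true function: we generate many noisy datasets, fit a simple and a complex polynomial to each, and compare the squared bias and the variance of the predictions at a single point.
```python
# A small simulation (made-up true function) to illustrate the tradeoff:
# many noisy datasets are drawn, a simple and a complex polynomial are fit to each,
# and we look at the squared bias and the variance of the predictions at x0 = 0.25.
import numpy as np

rng = np.random.default_rng(0)
h = lambda x: np.sin(2 * np.pi * x)      # the "true model" h (our choice for the demo)
x_grid = np.linspace(0, 1, 30)           # fixed x locations used for every dataset
x0, n_datasets = 0.25, 200

preds = {1: [], 12: []}                  # degree 1 (high bias) vs degree 12 (high variance)
for _ in range(n_datasets):
    y = h(x_grid) + rng.normal(0, 0.3, size=x_grid.shape)   # a fresh noisy dataset
    for d in preds:
        coefs = np.polyfit(x_grid, y, deg=d)                # fit a degree-d polynomial
        preds[d].append(np.polyval(coefs, x0))              # record its prediction at x0

for d, p in preds.items():
    p = np.array(p)
    print(f"degree {d:2d}: bias^2 = {(p.mean() - h(x0))**2:.4f}, variance = {p.var():.4f}")
```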
**Questions for understanding**:
> 1. Where does underfitting and overfitting lie in the graph above? How do they relate to bias and variance?
> 2. Why can't we usually just make a bunch of models with low bias and high variance and average them?
> 3. Why is low variance important in models?
<a id='polynomial-regression'></a>
## Polynomial Regression
Let's revisit the polynomial problem that we have discussed.
In this case, if our model has degree $d$, we have $d + 1$ features: $x = [x^0, x^1, ..., x^d]$. Now, we have a linear model with $d + 1$ features:
$$\hat{f}(x) = \sum_{i=0}^{d} a_i x^i$$
Model complexity in this case is the degree of the polynomial. As we saw last week, as $d$ increases, model complexity increases. The model gets better, but then gets erratic. This directly corresponds to the bias-variance graph above.
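Concretely, these polynomial features can be built with `np.vander`, which is also what we'll use on the `mpg` data later; a tiny sketch:
```python
# A tiny sketch of how the polynomial features look for a degree-3 model.
import numpy as np

x = np.array([1.0, 2.0, 3.0])
# np.vander builds [x^3, x^2, x^1, x^0] for each sample (decreasing powers by default)
np.vander(x, N=4)
```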
```
overfittingDemo()
```
As we saw from last time, the best model was a degree 3 model.
<a id='regularization'></a>
# Regularization
We talked about **validation** as a means of combating overfitting. However, this is not the only method to combat overfitting. Another method to do so is to add *regularization* terms to our loss function. **Regularization** basically penalizes complexity in our models. This allows us to add explanatory variables to our model without worrying as much about overfitting. Here's what our ordinary least squares model looks like with a regularization term:
$$\hat{\boldsymbol{\theta}} = \arg\!\min_\theta \sum_{i=1}^n (y_i - f_\boldsymbol{\theta}(x_i))^2 + \lambda R(\boldsymbol{\theta})$$
We've written the model a little differently here, but the first term is the same as the ordinary least squares regression model you learned last week. This time it's just generalized to any function of $x$ where $\theta$ is a list of parameters, or weights on our explanatory variables, such as coefficients to a polynomial. We're minimizing a loss function to find the best coefficients for our model.
The second term is the **regularization** term. The $\lambda$ parameter in front of it dictates how much we care about our regularization term – the higher $\lambda$ is, the more we penalize large weights, and the more the regularization makes our weights deviate from OLS.
**Question**: What happens when $\lambda = 0$?
So, what is $R(\theta)$ in the equation? There are a variety of different regularization functions that could take its place, but today, we'll just talk about the two most common types of functions: **ridge regression** and **LASSO regression**.
$$\begin{align}\text{ Ridge (L2 Norm)}: &\ R(\boldsymbol{\theta}) = \|\theta\|_2^2 = \sum_{i=1}^p \theta_i^2\\
\text{ LASSO (L1 Norm)}: &\ R(\boldsymbol{\theta}) = \|\theta\|_1=\sum_{i=1}^p \lvert \theta_i\rvert\end{align}$$
<a id='ridge'></a>
## Ridge
$$\hat{\boldsymbol{\theta}} = \arg\!\min_\theta \sum_{i=1}^n (y_i - f_\boldsymbol{\theta}(x_i))^2 + \lambda \|\theta\|_2^2$$
In **ridge** regression, the regularization function is the sum of squared weights. One nice thing about ridge regression is that there is always a unique, closed-form solution that minimizes this loss. The solution involves some linear algebra, which we won't get into in this notebook, but the existence of this formula makes the minimization computationally easy to solve!
$$\hat{\boldsymbol{\theta}} = \left(\boldsymbol{X}^T \boldsymbol{X} + \lambda\boldsymbol{I}\right)^{-1}\boldsymbol{X}^T\boldsymbol{Y}$$
If you recall, the solution to linear regression was of the form:
$$\hat{\boldsymbol{\theta}} = \left(\boldsymbol{X}^T \boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{Y}$$
And we said that the $\boldsymbol{X}^T\boldsymbol{X}$ isn't always invertible. **What about $\boldsymbol{X}^T \boldsymbol{X} + \lambda\boldsymbol{I}$?**
Turns out, this is always invertible! If you are familiar with linear algebra, this is equivalent to adding $\lambda$ to all the eigenvalues of $X^TX$.
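To make the formula concrete, here's a minimal NumPy sketch of the closed-form ridge solution on some made-up data:
```python
# Minimal NumPy sketch of the closed-form ridge solution on made-up data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                                  # made-up design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 20)   # made-up targets
lam = 1.0

# theta = (X^T X + lambda * I)^(-1) X^T y, solved without explicitly inverting the matrix
theta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
theta_ridge
```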
**To see this in practice**, we'll first create a regular linear regression model, and compare how it does against models using regularization on the `mpg` dataset we used from last week! We'll be constructing models of `displacement` vs. `mpg`, and seeing the difference from there!
First, let's construct the `mpg_train` dataset!
```
mpg = pd.read_csv("mpg.csv", index_col='name')# load mpg dataset
mpg = mpg.loc[mpg["horsepower"] != '?'].astype(float) # remove columns with missing horsepower values
mpg_train, mpg_test = train_test_split(mpg, test_size = .2, random_state = 0) # split into training set and test set
mpg_train, mpg_validation = train_test_split(mpg_train, test_size = .5, random_state = 0)
mpg_train.head()
```
**Exercise:** Now, let's create a regular linear regression model using the same process we've learned before (fitting, predicting, finding the loss)!
```
from sklearn.linear_model import LinearRegression
x_train = np.vander(mpg_train["displacement"], 13)
y_train = mpg_train[["mpg"]]
x_validation = np.vander(mpg_validation["displacement"], 13)
y_validation = mpg_validation[["mpg"]]
# instantiate your model
linear_model = ...
# fit the model
...
# make predictions on validation set
linear_prediction = ...
# find mean squared error
linear_loss = ...
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
```
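If you get stuck, one possible way to fill in the blanks looks roughly like this (note that the print statement reports a *root* mean squared error, so we take the square root):
```python
# One possible way to fill in the blanks above (a sketch, not the only solution).
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

linear_model = LinearRegression()                          # instantiate the model
linear_model.fit(x_train, y_train)                         # fit on the training features
linear_prediction = linear_model.predict(x_validation)     # predict on the validation set
# the print statement reports a *root* mean squared error, so take the square root
linear_loss = mean_squared_error(y_validation, linear_prediction) ** 0.5
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
```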
**Exercise:** Using what you did above as reference, do the same using a Ridge regression model!
```
from sklearn.linear_model import Ridge
...
ridge_loss = ... # mean squared error of ridge model
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
print("Root Mean Squared Error of Ridge Model: {:.2f}".format(ridge_loss))
```
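A possible solution sketch, with an arbitrarily chosen `alpha` (the regularization strength $\lambda$):
```python
# A possible solution sketch: same pipeline as above but with Ridge and an assumed alpha.
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

ridge_model = Ridge(alpha=1.0)                             # alpha (lambda) chosen arbitrarily here
ridge_model.fit(x_train, y_train)
ridge_prediction = ridge_model.predict(x_validation)
ridge_loss = mean_squared_error(y_validation, ridge_prediction) ** 0.5
print("Root Mean Squared Error of Ridge Model: {:.2f}".format(ridge_loss))
```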
<a id='lasso'></a>
## LASSO
$$\hat{\boldsymbol{\theta}} = \arg\!\min_\theta \sum_{i=1}^n (y_i - f_\boldsymbol{\theta}(x_i))^2 + \lambda \|\theta\|_1$$
In **LASSO** regression, the regularization function is **the sum of absolute values of the weights**. One key thing to note about **LASSO** is that it is **sparsity inducing**, meaning it forces weights to be exactly zero rather than just really small (which is what tends to happen in **Ridge Regression**), leaving you with fewer explanatory variables in the resulting model! Unlike Ridge Regression, LASSO doesn't have a closed-form solution that can be derived with linear algebra, so there's no formula that determines what the optimal weights should be; they have to be found numerically.
```
from sklearn.linear_model import Lasso
...
lasso_loss = ... # mean squared error of lasso model
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
print("Root Mean Squared Error of LASSO Model: {:.2f}".format(lasso_loss))
```
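A possible solution sketch, again with an arbitrarily chosen `alpha`; the last line counts the non-zero coefficients to show LASSO's sparsity in action:
```python
# A possible solution sketch; the coefficient count illustrates LASSO's sparsity.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

lasso_model = Lasso(alpha=1.0)                             # alpha chosen arbitrarily here
lasso_model.fit(x_train, y_train)
lasso_prediction = lasso_model.predict(x_validation)
lasso_loss = mean_squared_error(y_validation, lasso_prediction) ** 0.5
print("Root Mean Squared Error of LASSO Model: {:.2f}".format(lasso_loss))
print("non-zero coefficients:", np.sum(lasso_model.coef_ != 0), "out of", lasso_model.coef_.size)
```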
As we can see, both **Ridge Regression and LASSO Regression** minimized the loss of our linear regression models, so maybe penalizing some features allowed us to prevent overfitting on our dataset!
<a id='visualizing-ridge-and-lasso'></a>
## Visualizing Ridge and LASSO
We went through a lot about **ridge** and **LASSO**, but we didn't really get into what they look like for understanding! And so, here are some visualizations that might help build the intution behind some of the characteristics of these two regularization methods.
Another way to describe the modified minimization function above is that it's the same loss function as before, with the *additional constraint* that $R(\boldsymbol{\theta}) \leq t$. Now, $t$ is related to $\lambda$ but the exact relationship between the two parameters depends on your data. Regardless, let's take a look at what this means in the two-dimensional case. For ridge,
$$\theta_0^2 + \theta_1^2 = t$$
Lasso is of the form $$\left|\theta_0\right| + \left|\theta_1\right| =t$$
<a id='norm-balls'></a>
### Norm Balls
Let's take at another visualization that may help build some intuition behind how both of these regularization methods work!
<img src='https://upload.wikimedia.org/wikipedia/commons/f/f8/L1_and_L2_balls.svg' width=400/>
Image from https://upload.wikimedia.org/wikipedia/commons/f/f8/L1_and_L2_balls.svg.
<img src='norm_balls.png' width=400/>
Image from https://towardsdatascience.com/regression-analysis-lasso-ridge-and-elastic-net-9e65dc61d6d3.
The rhombus and the circle are visualizations of the regularization constraint, while the blue ellipses are the contours (level sets) of the loss function based on the weights. We want to minimize the sum of the loss and the regularization term, and the optimal point is where the loss contours first touch the constraint region.
**Question**: Based on these visualizations, could you explain why LASSO is sparsity-inducing?
It turns out that the L2 norm ball is always a smooth surface, from a circle in 2D to a sphere in 3D. On the other hand, the L1 ball always has sharp corners; in 3D it forms an octahedron. This is exactly the feature that makes LASSO sparsity-inducing. Just as humans are more likely to bump into sharp corners than smooth surfaces, the loss contours are most likely to touch the L1 ball at one of its corners, where some coordinates are exactly zero.
<a id='regularization-and-bias-variance'></a>
## Regularization and Bias Variance
As we mentioned earlier, **bias** is the average least squares loss across multiple models of the same family (e.g. polynomials of the same degree) trained on separate datasets. **Variance** is the average variance of the weight vectors (coefficients) on your features.
Without the regularization term, we’re just minimizing bias; the regularization term means we won’t get the lowest possible bias, but we’re exchanging that for some lower variance so that our model does better at generalizing to data points outside of our training data.
<a id='lambda'></a>
## Lambda
We said that $\lambda$ is how much we care about the regularization term, but what does that look like? Let's return to the polynomial example from last week, and see what the resulting models look like with different values of $\lambda$ given a degree 8 polynomial.
```
ridgeRegularizationDemo([0, 0.5, 1.0, 5.0, 10.0], 8)
```
From the diagram above, it's difficult to determine which lambda value helps fit our model most closely to the true data distribution. So, **how do we know what to use for $\lambda$ (or `alpha` in the `sklearn.linear_model` constructors)?**
That's right, let's use the process of **validation** here! In this case, we'd be finding the value for lambda that **minimizes the loss for ridge regression, and then the one that minimizes the loss for LASSO regression**!
<a id='validation-on-lambda'></a>
## Validation on Lambda
Let's try to find the best $\lambda$ for the polynomial model on `displacement` from above.
```
lambdas = np.arange(0, 200) # create a list of potential lambda values
# create a list containing the corresponding mean_squared_error for each lambda using both ridge and lasso regression
ridge_errors = []
lasso_errors = []
for l in lambdas:
ridge_errors.append(...)
lasso_errors.append(...)
# finds the index of the minimum value in each list
answer = ridge_errors.index(min(ridge_errors)), lasso_errors.index(min(lasso_errors))
answer
```
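One possible way to fill in the blanks above, written as a compact variant that refits Ridge and LASSO for each candidate $\lambda$ and records the validation error:
```python
# One possible way to fill in the blanks above: refit Ridge and LASSO for each
# candidate lambda and record the validation root mean squared error.
from sklearn.linear_model import Ridge, Lasso
from sklearn.metrics import mean_squared_error

def validation_rmse(model):
    model.fit(x_train, y_train)
    return mean_squared_error(y_validation, model.predict(x_validation)) ** 0.5

ridge_errors = [validation_rmse(Ridge(alpha=l)) for l in lambdas]
lasso_errors = [validation_rmse(Lasso(alpha=l)) for l in lambdas]

# lambdas that give the smallest validation error for each model
lambdas[ridge_errors.index(min(ridge_errors))], lambdas[lasso_errors.index(min(lasso_errors))]
```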
As we can see from above, we've been able to determine which lambdas minimizes our ridge regression model and our LASSO regression model through validation by iterating through potential lambda values and finding the ones that minimize our loss for each model!
<a id='conclusion'></a>
# Conclusion
Through the course of the notebook, we introduced two main concepts, **bias** and **variance**, and how the two relate to one another when it comes to finding the best model for our dataset! We also went into methods we can use to reduce overfitting, and in turn lower variance, by taking a look at a process called **regularization**. We covered the two main regularized regression models, **ridge regression** and **LASSO regression**, and saw the difference between the two (ridge penalizes large weights, LASSO makes weights sparse). We also took a look at different visualizations of the two to build up some more intuition behind how they work, through **graphs** and **norm balls**. Finally, we went through a familiar process (**validation**) to determine the best values of lambda for our models, officially ending our journey through **bias** and **variance** and how we can minimize both in our models!
# Congratulations! You are now a Bias + Variance master!
<a id='exercises'></a>
## Exercises
1. What happens as $\lambda$ increases?
1. bias increases, variance increases
2. bias increases, variance decreases
3. bias decreases, variance increases
4. bias decreases, variance decreases
**Insert answer here**:
2. **True** or **False**? Bias is how much error your model makes.
**Insert answer here:**
3. What is **sparsity**?
**Insert answer here:**
4. For each of the following, choose **ridge**, **lasso**, **both**, or **neither**:
1. L1-norm
2. L2-norm
3. Induces sparsity
4. Has analytic (mathematical) solution
5. Increases bias
6. Increases variance
**Insert answer here:**
5. Which one is better to use: Ridge Regression or LASSO Regression?
**Insert answer here:**
### Congrats! You've finished our few conceptual questions, now you can help out the rest of your peers and use the rest of the time to work on the intermediate project with your project group!
|
github_jupyter
|
import matplotlib.pyplot as plt
import random
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from plotting import overfittingDemo, plot_multiple_linear_regression, ridgeRegularizationDemo, lassoRegularizationDemo
from scipy.optimize import curve_fit
from sklearn.metrics import mean_squared_error
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
overfittingDemo()
mpg = pd.read_csv("mpg.csv", index_col='name')# load mpg dataset
mpg = mpg.loc[mpg["horsepower"] != '?'].astype(float) # remove columns with missing horsepower values
mpg_train, mpg_test = train_test_split(mpg, test_size = .2, random_state = 0) # split into training set and test set
mpg_train, mpg_validation = train_test_split(mpg_train, test_size = .5, random_state = 0)
mpg_train.head()
from sklearn.linear_model import LinearRegression
x_train = np.vander(mpg_train["displacement"], 13)
y_train = mpg_train[["mpg"]]
x_validation = np.vander(mpg_validation["displacement"], 13)
y_validation = mpg_validation[["mpg"]]
# instantiate your model
linear_model = ...
# fit the model
...
# make predictions on validation set
linear_prediction = ...
# find mean squared error
linear_loss = ...
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
from sklearn.linear_model import Ridge
...
ridge_loss = ... # mean squared error of ridge model
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
print("Root Mean Squared Error of Ridge Model: {:.2f}".format(ridge_loss))
from sklearn.linear_model import Lasso
...
lasso_loss = ... # mean squared error of lasso model
print("Root Mean Squared Error of Linear Model: {:.2f}".format(linear_loss))
print("Root Mean Squared Error of LASSO Model: {:.2f}".format(lasso_loss))
ridgeRegularizationDemo([0, 0.5, 1.0, 5.0, 10.0], 8)
lambdas = np.arange(0, 200) # create a list of potential lambda values
# create a list containing the corresponding mean_squared_error for each lambda usinb both ridge and lasso regression
ridge_errors = []
lasso_errors = []
for l in lambdas:
ridge_errors.append(...)
lasso_errors.append(...)
# finds the index of the minimum value in each list
answer = ridge_errors.index(min(ridge_errors)), lasso_errors.index(min(lasso_errors))
answer
| 0.75274 | 0.992961 |
# Start simple, start with a baseline !
- toc: true
## Motivation
Don't start with something fancy and complicated as your first solution.
Always start with a baseline (a simple solution). Why, you might ask? Well, there are two reasons for that:
- So you have something to compare to.
- Because complicated does not imply effective (your simple solution might be better than the fancy one).
> If you are using colab, make sure to change your runtime to GPU.
```
#hide
!pip install -Uqq fastbook
import fastbook
#hide
from fastai.vision.all import *
from fastbook import *
matplotlib.rc('image', cmap='Greys')
```
## What will we do ?
We will implement a simple approach to handwritten digit classification **without Deep Learning**. We'll see that it is pretty accurate, and we'll learn some fastai, PyTorch, and computer vision basics along the way.
## How will we do it ?
We will use a dataset called **MNIST**, which is a pretty popular dataset containing images of handwritten digits, but we'll use a sample of it containing only the digits 3 and 7 (which is provided by fastai).
Our approach consists of two steps:
- Obtain the image of perfect 3 and 7.
- Compare every picture we have to these perfect 3 and 7, and whichever it is closest to, we'll attribute it the corresponding label (a minimal sketch of this comparison step follows below).
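To make the second step concrete, here's a minimal sketch of that comparison, assuming we measure closeness with the mean absolute pixel difference (an L1 distance) and that the "perfect" images `mean3` and `mean7` have already been computed; we build all of these ingredients step by step below.
```python
# Minimal sketch of the comparison step. Assumes `mean3` and `mean7` (the "perfect"
# digits computed later) and a batch of images stacked into a tensor of shape
# (n, 28, 28); closeness is the mean absolute pixel difference (an L1 distance).
def l1_distance(images, ideal):
    return (images - ideal).abs().mean((-1, -2))   # average over the two pixel axes

def is_three(images):
    # predict 3 whenever an image is closer to the perfect 3 than to the perfect 7
    return l1_distance(images, mean3) < l1_distance(images, mean7)
```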
## Let's do it !
### 1. Create our perfect 3 and 7
Okay, before we start let's download our dataset. For that we will use the fastai method *untar_data*, which, given a *URL*, downloads the dataset (if not already downloaded) and extracts it (if not already extracted).
We position ourselves in the dataset folder by setting *BASE_PATH* to the path returned by the method.
```
path = untar_data(URLs.MNIST_SAMPLE)
Path.BASE_PATH = path
```
Inside that folder we can see two other folders, *valid* and *train*, that contain the validation and training sets respectively. You'll see this pretty frequently when you download publicly available datasets (they will already be split into *valid* and *train* sets for you).
```
path.ls()
```
Now inside the train folder, we also have two other folders, 3 and 7, that as you probably guessed contain the respective images of the digits (this is also pretty common in public datasets: the different images will be put in folders named with the corresponding label).
```
(path/'train').ls()
```
We get the paths of the different images (sorted), so we can open them later. If you notice, there is *(#6131)* before the list. This is because it's not an actual list but rather a fastai object called L, which is a list with some additional features (one of them is that it displays only the first ten elements).
```
threes = (path/'train'/'3').ls().sorted()
sevens = (path/'train'/'7').ls().sorted()
threes
```
Let's open one image for each label, to ensure that they are actual images of 3 and 7.
```
im3_path = threes[1]
im3 = Image.open(im3_path)
im3
im7_path = sevens[4]
im7 = Image.open(im7_path)
im7
```
#### Images are numbers ?
Before we go any further, I need to tell you that an image, for a computer, is just a bunch of numbers.
To be more precise, it is composed of pixels. To simplify things (and this is the case for our dataset), we'll take the example of greyscale images (images containing only black, white, and shades of grey in between).
Every pixel makes up a tiny bit of the image: the pixel takes a value (between 0 and 255), and that value tells the computer how to display that particular pixel. Assembled together, the pixels form the image.
Okay, let's turn our image into a *NumPy* array (which is just an array but with **super powers**; *NumPy* is the most used Python module for *numerical computing*).
The thing between brackets is called **slicing**, and it's telling *NumPy* that we want to display rows/columns 4 up to but not including 10.
As we can see, an image is just a bunch of numbers.
```
array(im3)[4:10,4:10]
```
We'll do the same thing, but instead of an array, we'll turn our image into a *PyTorch* **tensor**, which behaves much like a *Numpy* array but has the benefit of running our *operations* on the **GPU** (it's a lot, and I mean a lot, **faster**).
The *dtype* at the end is the type of data in the tensor/array, which in this case is an unsigned 8-bit integer (0 to 255).
```
tensor(im3)[4:10,4:10]
```
To better visualize how a computer displays an image, we'll turn the tensor containing the pixel values of our image into a *DataFrame* (an object provided by the **Pandas** module; for now, you just need to know that it's a table of values), with each value (pixel) in the table shown in the corresponding color (0 for white, 255 for black, values in between for shades of grey).
```
im3_t = tensor(im3)
df = pd.DataFrame(im3_t[4:15,4:22])
df.style.set_properties(**{'font-size':'6pt'}).background_gradient('Greys')
```
Let's open all our images, turn them into tensors and put them in the corresponding list. To speed things up, we'll use a list comprehension (a quick Google search will fill in the details).
Let's ensure that we have all our images ready by checking the length of the two lists.
```
seven_tensors = [tensor(Image.open(o)) for o in sevens]
three_tensors = [tensor(Image.open(o)) for o in threes]
len(three_tensors),len(seven_tensors)
```
So far we have seen how to turn our images into tensors/arrays (a bunch of numbers), but how do we do the reverse operation?
Well, it is pretty simple: thanks to the fastai *show_image* method, we can do just that.
```
show_image(three_tensors[1]);
```
Remember, we have to find the perfect 3 and 7; one way of doing it (our approach) is to take the mean of every pixel over all the 3/7 images of our dataset.
To do that we'll **stack** our second order tensors (second order means that it's a two dimensional object, or matrix; more generally we say that we have a **k-th order tensor**) to form a third order tensor (a three dimensional object).
We then cast it to float (turn the integers in the tensor into floats, in other words change the type of the data in our tensor).
We also divide by 255 because when images are represented by floats, we generally expect the values to be between 0 and 1.
To ensure that we've done this right we print the **shape** of the tensor (which is the number of elements in each axis/dimension) and we check that we have indeed 6131 images of 28x28.
> Note that nothing about this tensor tells us explicitly that the first axis is the number of images and so on. This is because it is entirely up to us and how we construct it. To the tensor, it is just a bunch of numbers in memory.
```
stacked_sevens = torch.stack(seven_tensors).float()/255
stacked_threes = torch.stack(three_tensors).float()/255
stacked_threes.shape
```
We can also check that we did not screw something up, by checking the number of dimensions/axes of the tensor (which in this case is three).
> The number of dimensions/axes of the tensor is also the length of the shape.
```
stacked_threes.ndim
```
Okay, now we have everything we need. To calculate our perfect 3/7 we'll use the *mean* method provided by *PyTorch*, passing it the axis along which we want to calculate the mean; in our case it's the first axis (index 0), because we want the mean of every pixel across all images.
```
mean3 = stacked_threes.mean(0)
show_image(mean3);
mean7 = stacked_sevens.mean(0)
show_image(mean7);
```
> As you can notice, the parts where the images "agree" (similar pixel values) on how a handwritten 3/7 is supposed to look are darker than the parts where they disagree, which look kind of blurry (due to differing pixel values).
### 2. Measure the distance (similarity)
Our final step will consist of calculating the distance between a given image and our perfect 3 and 7; we'll say it is a 3 or a 7 depending on which perfect digit it is closer to.
Okay, but what's the distance between two images? For that purpose we can use either:
- **L1 norm**: which is the mean absolute difference, in other words, we take the absolute value of the difference between each pixel of the two images and average over all pixels.
- **L2 norm**: root mean squared error, same as above but instead of taking the absolute value, we square the differences, then average and finally take the square root (which "undoes" the squaring). Both are written out as formulas below.
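In formula form (these are just the standard definitions, nothing specific to fastai), for two images $a$ and $b$ with $n$ pixels each:

$$\mathrm{L1}(a,b) = \frac{1}{n}\sum_{i=1}^{n} \lvert a_i - b_i \rvert \qquad\qquad \mathrm{L2}(a,b) = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (a_i - b_i)^2}$$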
```
a_3 = stacked_threes[1]
dist_3_abs = (a_3 - mean3).abs().mean()
dist_3_sqr = ((a_3 - mean3)**2).mean().sqrt()
dist_3_abs, dist_3_sqr
dist_7_abs = (a_3 - mean7).abs().mean()
dist_7_sqr = ((a_3 - mean7)**2).mean().sqrt()
dist_7_abs, dist_7_sqr
```
Taking a random 3, we see that the distance from the perfect 3 is less than the distance from the perfect 7, which is exactly what we wanted. Our solution looks promising!
> *PyTorch* provides methods to calculate the **L1 norm & MSE** (**MSE** is just the L2 norm without the square root); they live in *torch.nn.functional*, which is already imported by fastai as **F**.
> The L1 & L2 norms are commonly used as **loss functions**.
```
F.l1_loss(a_3.float(),mean7), F.mse_loss(a_3,mean7).sqrt()
```
> The **MSE** penalizes big mistakes more heavily than the **L1 norm** (on the other hand it is more merciful towards small mistakes): squaring amplifies differences larger than 1 and shrinks differences smaller than 1.
To test how good our model is, we'll need a validation set (images it has not seen before); recall that in our dataset folder this is already provided by **MNIST** in the valid folder. So we'll just do the same operations we did for the training set above to create our *tensors*.
> It's good practice to check your tensor's shape to ensure that you've done everything properly.
```
valid_3_tens = torch.stack([tensor(Image.open(o))
for o in (path/'valid'/'3').ls()])
valid_3_tens = valid_3_tens.float()/255
valid_7_tens = torch.stack([tensor(Image.open(o))
for o in (path/'valid'/'7').ls()])
valid_7_tens = valid_7_tens.float()/255
valid_3_tens.shape,valid_7_tens.shape
```
Let's create a function that does what we've discussed earlier (calculate distances); the *(-1, -2)* means that we calculate the mean along the last and second-to-last axes (which represent the width and the height of the image).
```
def mnist_distance(a,b): return (a-b).abs().mean((-1,-2))
mnist_distance(a_3, mean3)
```
Cool, but this was for one image; how about the whole validation set (because this is what we need to calculate the *metric*)? Well, I'm glad you asked, this shows that you are paying attention (or just that I'm talking to myself ...).
Well, thanks to *PyTorch* we don't have to re-write the *mnist_distance* function, because it uses a neat trick called **Broadcasting**. It means that rather than complaining about the shapes being different, when given a rank-3 tensor (the validation set images) and a rank-2 tensor (the ideal 3), it treats the rank-2 tensor as if it were a rank-3 one (you can imagine that it kind of duplicates the ideal 3 1010 times and subtracts it from each image in the validation set element-wise, but it does not actually duplicate it-- no extra memory is allocated).
Finally it takes the mean over the last and second-to-last axes, which are the height and width of each image, leaving us with a rank-1 tensor (an array) of the distance of each image in the validation set from the ideal 3.
```
valid_3_dist = mnist_distance(valid_3_tens, mean3)
valid_3_dist, valid_3_dist.shape
```
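If broadcasting still feels a bit magical, here is a tiny standalone sketch (toy tensors, not part of our MNIST pipeline) showing how the shapes work out:
```
import torch

batch = torch.ones(5, 3, 3)       # pretend we have 5 tiny 3x3 "images"
template = torch.zeros(3, 3)      # one "ideal" image
diff = (batch - template).abs()   # the template is broadcast across the first axis
print(diff.shape)                 # torch.Size([5, 3, 3])
print(diff.mean((-1, -2)).shape)  # torch.Size([5]) -- one distance per image
```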
Now all that is left to do is create a function that tells us whether an image is a 3 or a 7, by comparing its distances to the ideal 3 and the ideal 7 (True for 3, False for 7).
> This function will automatically do **Broadcasting** and be applied **element-wise**, just like all *PyTorch* functions and operators.
```
def is_3(x): return mnist_distance(x, mean3) < mnist_distance(x, mean7)
is_3(valid_3_tens)
```
> If we convert a boolean tensor into float we get a 1. for True and 0. for False (this will come in handy to calculate the accuracy).
```
is_3(a_3), is_3(a_3).float()
```
Let's see how good our simple solution is. It has **95%** overall accuracy, which is more than acceptable!
```
accuracy_3s = is_3(valid_3_tens).float().mean()
accuracy_7s = (1 - is_3(valid_7_tens).float()).mean()
accuracy_3s,accuracy_7s,(accuracy_3s+accuracy_7s)/2
```
<a href="https://colab.research.google.com/github/BachiLi/redner/blob/master/tutorials/batch_rendering.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
In deep learning or multi-view reconstruction it is common to deal with image batches. While you can always render multiple images in a loop in redner and stack them together, redner provides functionality that makes this slightly easier: basically, you can just pass a list of scenes to the rendering functions.
```
!pip install --upgrade redner-gpu
import torch
import pyredner
```
This time we will download the famous [Stanford bunny](https://en.wikipedia.org/wiki/Stanford_bunny) from, again, Morgan McGuire's awesome [website](https://casual-effects.com/data/):
```
import urllib
import zipfile
# wget
filedata = urllib.request.urlretrieve('https://casual-effects.com/g3d/data10/research/model/bunny/bunny.zip', 'bunny.zip')
# unzip
zip_ref = zipfile.ZipFile('bunny.zip', 'r')
zip_ref.extractall('bunny/')
objects = pyredner.load_obj('bunny/bunny.obj', return_objects=True)
```
We setup four cameras looking at the bunny:
```
camera0 = pyredner.automatic_camera_placement(objects, resolution=(512, 512))
print(camera0.position)
print(camera0.look_at)
camera1 = pyredner.Camera(position = camera0.look_at + torch.tensor([0.0, 0.0, 2.5]),
look_at = camera0.look_at,
up = camera0.up,
fov = camera0.fov,
resolution = camera0.resolution)
camera2 = pyredner.Camera(position = camera0.look_at + torch.tensor([2.5, 0.0, 0.0]),
look_at = camera0.look_at,
up = camera0.up,
fov = camera0.fov,
resolution = camera0.resolution)
camera3 = pyredner.Camera(position = camera0.look_at + torch.tensor([-2.5, 0.0, 0.0]),
look_at = camera0.look_at,
up = camera0.up,
fov = camera0.fov,
resolution = camera0.resolution)
```
Now we setup four scenes for rendering the batch:
```
scene0 = pyredner.Scene(camera = camera0, objects = objects)
scene1 = pyredner.Scene(camera = camera1, objects = objects)
scene2 = pyredner.Scene(camera = camera2, objects = objects)
scene3 = pyredner.Scene(camera = camera3, objects = objects)
scenes = [scene0, scene1, scene2, scene3]
```
The rendering functions in redner can take a list of scenes and return a batch of images. Internally they just render the images sequentially.
```
imgs = pyredner.render_albedo(scenes)
print(imgs.shape) # The shape is [N, H, W, C]
# Visualize imgs
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure()
plt.imshow(torch.pow(imgs[0, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[1, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[2, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[3, :, :, :], 1.0/2.2).cpu())
plt.show()
```
This also works for other rendering modes. For example we can batch render in the deferred rendering mode. Batch rendering might be faster in the deferred rendering mode since we batch the lighting if the user provides the same lights for all scenes.
```
# lights can be a list or list of lists. The latter provides a different list of lights for each scene.
lights = [pyredner.DirectionalLight(torch.tensor([1.0, 1.0, 1.0], device = pyredner.get_device()), torch.tensor([5.0, 5.0, 5.0], device = pyredner.get_device())),
pyredner.DirectionalLight(torch.tensor([0.0, 0.0, -1.0], device = pyredner.get_device()), torch.tensor([2.0, 2.0, 2.0], device = pyredner.get_device()))]
imgs = pyredner.render_deferred(scene = scenes, lights = lights)
plt.figure()
plt.imshow(torch.pow(imgs[0, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[1, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[2, :, :, :], 1.0/2.2).cpu())
plt.figure()
plt.imshow(torch.pow(imgs[3, :, :, :], 1.0/2.2).cpu())
plt.show()
```
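As the comment in the cell above mentions, `lights` can also be a list of lists, giving each scene its own set of lights. A quick sketch of what that might look like for our four scenes (the directions and intensities here are made up purely for illustration):
```
key_light = pyredner.DirectionalLight(torch.tensor([1.0, 1.0, 1.0], device = pyredner.get_device()),
                                      torch.tensor([5.0, 5.0, 5.0], device = pyredner.get_device()))
fill_light = pyredner.DirectionalLight(torch.tensor([0.0, 0.0, -1.0], device = pyredner.get_device()),
                                       torch.tensor([2.0, 2.0, 2.0], device = pyredner.get_device()))
# One list of lights per scene, same length as `scenes`.
lights_per_scene = [[key_light], [fill_light], [key_light, fill_light], [fill_light]]
imgs = pyredner.render_deferred(scene = scenes, lights = lights_per_scene)
```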
Note that you can change the geometries, materials, and lighting arbitrarily in batch rendering. The only parameter that needs to be the same across the batch is the output size for each scene.
```
from NYTAnalysis import *
DATESnHEADLINESnSNIPPETS = load_obj('DATESnHEADLINESnSNIPPETS')
import re
def pad_with_zero(number,like_number=10):
like_len=len('%d'%like_number)
num_str='%d'%number
while len(num_str) < like_len:
num_str='0'+num_str
return num_str
def string_to_word_list(astring):
return [word.strip(string.punctuation) for word in astring.split()]
def standard_form(astring):
astring=astring.lower()
astring=astring.strip()
astring=re.sub(r'[^\w\s]','',astring)
astring=re.sub(r'[0-9]','',astring)
return astring
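# Interactive helpers: display items in chunks of `at_a_time`, read a list of indices
# from the user ('done' to stop), and split the items into kept and dropped groups.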
def choose_elements(alist):
isstop=False
for idx in xrange(len(alist)):
print '%d:%s'%(idx,str(alist[idx]))
usr_input=raw_input('Provide list to add/drop: ')
if len(usr_input.strip())==0:
bad_idxs=[]
elif usr_input.strip()=='done':
bad_idxs=[]
isstop=True
else:
bad_idxs=eval(usr_input)
good_idxs=[]
for idx in xrange(len(alist)):
if idx in bad_idxs:
continue
good_idxs.append(idx)
return good_idxs, bad_idxs, isstop
def interactive_reduction(alist,at_a_time=10):
removed_els=[]
good_els=[]
for idx in xrange(0,len(alist),at_a_time):
if idx+at_a_time >= len(alist):
last_idx=len(alist)
else:
last_idx=idx+at_a_time
curr_list=alist[idx:last_idx]
good_idxs, bad_idxs, isstop=choose_elements(curr_list)
if len(bad_idxs)>0:
removed_els.extend([curr_list[bad_idx] for bad_idx in bad_idxs])
if len(good_idxs)>0:
good_els.extend([curr_list[good_idx] for good_idx in good_idxs])
return good_els, removed_els
def interactive_selection(alist,at_a_time=10):
removed_els=[]
good_els=[]
for idx in xrange(0,len(alist),at_a_time):
if idx+at_a_time >= len(alist):
last_idx=len(alist)
else:
last_idx=idx+at_a_time
curr_list=alist[idx:last_idx]
bad_idxs, good_idxs, isstop=choose_elements(curr_list)
if len(bad_idxs)>0:
removed_els.extend([curr_list[bad_idx] for bad_idx in bad_idxs])
if len(good_idxs)>0:
good_els.extend([curr_list[good_idx] for good_idx in good_idxs])
if isstop:
break
return good_els, removed_els
stop_words=[]
with open('stop_words.txt', 'r') as f:
stop_words=f.read().splitlines()
stop_words=[standard_form(word) for word in stop_words]
long_stop_words=[]
for word in stop_words:
if len(word)>2:
long_stop_words.append(word)
print long_stop_words
Trump={'Start':[1,20,2017],'End':[0,0,0000]}
Obama={'Start':[1,20,2009],'End':[1,20,2017]}
BushJr={'Start':[1,20,2001],'End':[1,20,2009]}
Clinton={'Start':[1,20,1993],'End':[1,20,2001]}
BushSr={'Start':[1,20,1989],'End':[1,20,1993]}
Reagan={'Start':[1,20,1981],'End':[1,20,1989]}
Carter={'Start':[1,20,1977],'End':[1,20,1981]}
Ford={'Start':[8,9,1974],'End':[1,20,1977]}
Nixon={'Start':[1,20,1969],'End':[8,9,1974]}
Johnson={'Start':[11,22,1963],'End':[1,20,1969]}
Kennedy={'Start':[1,20,1961],'End':[11,22,1963]}
Eisenhower={'Start':[1,20,1953],'End':[1,20,1961]}
Truman={'Start':[4,12,1945],'End':[1,20,1953]}
Presidents=['Trump','Obama','BushJr','Clinton','BushSr','Reagan','Carter','Ford','Nixon','Johnson','Kennedy','Eisenhower','Truman']
AdministrationYears={President:eval(President) for President in Presidents}
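# For each president, build a window of article dates: administration start minus BufferYears
# through administration end plus BufferYears (Trump/Obama capped at the end of 2017),
# with both endpoints converted to day counts via date_to_days.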
ArticleYears={}
BufferYears=2
for President in AdministrationYears:
AdminStart=AdministrationYears[President]['Start']
AdminEnd=AdministrationYears[President]['End']
ArticleStart=AdminStart
ArticleStart[2]-=BufferYears
ArticleEnd=AdminEnd
if President == 'Trump' or President == 'Obama':
ArticleEnd=[12,31,2017]
else:
ArticleEnd[2]+=BufferYears
Day=pad_with_zero(ArticleStart[1])
Month=pad_with_zero(ArticleStart[0])
Year=pad_with_zero(ArticleStart[2],1000)
NYTDate='%s-%s-%sT00:00:00Z' % (Year,Month,Day)
StartInDays=date_to_days(NYTDate)
Day=pad_with_zero(ArticleEnd[1])
Month=pad_with_zero(ArticleEnd[0])
Year=pad_with_zero(ArticleEnd[2],1000)
NYTDate='%s-%s-%sT00:00:00Z' % (Year,Month,Day)
EndInDays=date_to_days(NYTDate)
ArticleYears[President]={'Start':StartInDays,'End':EndInDays}
print ArticleYears
article_dates=np.array([date_to_days(date) for date in DATESnHEADLINESnSNIPPETS['pub_date']])
PresidentTopics={President:{} for President in Presidents}
ArticleCounts={President:0 for President in Presidents}
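# For every article in a president's date window whose headline mentions that president,
# count each word of length > 2 at most once per article (a document-frequency count
# over headline and snippet text).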
for President in Presidents:
print President
ArticleIdxs=np.argwhere((article_dates >= ArticleYears[President]['Start'])&\
(article_dates < ArticleYears[President]['End'])).flatten()
Count=0
for Idx in ArticleIdxs:
Headline=DATESnHEADLINESnSNIPPETS['headline']['main'][Idx]
if word_mention(President,Headline):
Count+=1
Snippet=DATESnHEADLINESnSNIPPETS['snippet'][Idx]
content=string_to_word_list(Headline)
try:
content.extend(string_to_word_list(Snippet))
except AttributeError:
print 'No snippet in %d' % Idx
already_counted=[]
for word in content:
word=standard_form(word)
if len(word) <= 2:
continue
# if word in long_stop_words:
# continue
if word not in already_counted:
already_counted.append(word)
try:
PresidentTopics[President][word]+=1
except KeyError:
PresidentTopics[President][word]=1
ArticleCounts[President]=Count
for President in Presidents:
for word in stop_words:
try:
del PresidentTopics[President][word]
except KeyError:
# do nothing
            pass
print len(PresidentTopics['Trump'].keys())
print ArticleCounts['Nixon']
PresidentTopics['Trump'].keys()[0:10]
TopHits={President:[] for President in Presidents}
NHits=300
for President in Presidents:
sorted_results=sorted(PresidentTopics[President].iteritems(), key=lambda (k,v): (v,k), reverse=True)
TopHits[President]=[el[0] for el in sorted_results[1:NHits+1]]
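# For each president, build an articles x top-words count matrix (rows normalized by the
# number of matched words); articles with no matching words are dropped, and the matrix
# is saved to <President>.csv.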
for President in Presidents:
print President
NArticles=ArticleCounts[President]
PresMat=np.zeros([NArticles,NHits])
ArticleIdxs=np.argwhere((article_dates >= ArticleYears[President]['Start'])&\
(article_dates < ArticleYears[President]['End'])).flatten()
article=-1
for Idx in ArticleIdxs:
Headline=DATESnHEADLINESnSNIPPETS['headline']['main'][Idx]
if word_mention(President,Headline):
article+=1
Snippet=DATESnHEADLINESnSNIPPETS['snippet'][Idx]
content=string_to_word_list(Headline)
try:
content.extend(string_to_word_list(Snippet))
except AttributeError:
pass
wc=0
for word in content:
word=standard_form(word)
try:
idx=TopHits[President].index(word)
PresMat[article,idx]+=1
wc+=1
except ValueError:
pass
if wc==0:
article-=1
NArticles-=1
else:
PresMat[article,:]=PresMat[article,:]/float(wc)
PresMat=PresMat[0:NArticles,:]
np.savetxt(President+'.csv', PresMat, delimiter=',')
for President in Presidents:
with open(President+'_labels.csv','w') as f:
for entry in TopHits[President]:
f.write(entry+', ')
President='Nixon'
print PresidentTopics[President]['watergate']/float(ArticleCounts[President])
President='Trump'
print PresidentTopics[President]['russia']/float(ArticleCounts[President])
print PresidentTopics[President]['tweet']/float(ArticleCounts[President])
maxkeys=[]
amax=0
for key in PresidentTopics['Trump'].keys():
if key in ['trump','the','to','a','it','and','of','donald','j','in','on','for','president','his']:
continue
if PresidentTopics[President][key] > amax:
amax=PresidentTopics[President][key]
maxkeys=[key]
elif PresidentTopics[President][key] == amax:
maxkeys.append(key)
print maxkeys
good_keys, bad_keys=interactive_reduction(PresidentTopics['Trump'].keys())
curr_list=['a','b','c']
# g, b = choose_elements(curr_list)
g, b = interactive_reduction(curr_list)
print g
print b
sortedtrump=sorted(PresidentTopics['Trump'].iteritems(), key=lambda (k,v): (v,k), reverse=True)
print sortedtrump[0:300]
# g, b=interactive_selection(sortedtrump,100)
# print g
goodtrump=g
badtrump=b
# [10,36,46,53,68,77,78,80,83,93,98]
# [29,35,45,46,52,54,61,65,66,70]
# [10,22]
sortednixon=sorted(PresidentTopics['Nixon'].iteritems(), key=lambda (k,v): (v,k), reverse=True)
goodnixon, badnixon=interactive_selection(sortednixon,100)
# [4,36,41,48,61,62,77]
# [2,35]
sortedclinton=sorted(PresidentTopics['Clinton'].iteritems(), key=lambda (k,v): (v,k), reverse=True)
goodclinton, badclinton=interactive_selection(sortedclinton,100)
[5,40,57,69,76,84]
[14,9]
```
# Libraries
```
import os
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
import pandas as pd
from collections import namedtuple
import requests as req
import io
import eventlet
import datetime
import requests
```
# Scraper
```
bashCommand = "curl https://18853.live.streamtheworld.com/BLURADIO_SC --output ./audio_scrapping/somess.mp3 --max-time 400"
os.system(bashCommand)
```
## Get the audio file
```
import google.cloud.storage as storage
bashCommand = "curl https://18853.live.streamtheworld.com/BLURADIO_SC --output ./blue.mp3 --max-time 400"
os.system(bashCommand)
bashCommand = "ffmpeg -i ./blue.mp3.mp3 -c:v libx264 ./audio_scrapping/long.flac"
os.system(bashCommand)
bashCommand = "ffmpeg -i ./audio_scrapping/long.flac -ac 1 ./audio_scrapping/mono_long.flac"
os.system(bashCommand)
def upload_blob(bucket_name, source_file_name, destination_blob_name):
"""Uploads a file to the bucket."""
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
print('File {} uploaded to {}.'.format(
source_file_name,
destination_blob_name))
upload_blob("radioscrapping", "./audio_scrapping/mono_long.flac", "jupyter_tries/mono_long.flac")
from pydub import AudioSegment
song = AudioSegment.from_mp3("somess.mp3")
ss = song.export(format="flac", parameters=["-ac", "1"])
```
## Two commands to convert it
```
bashCommand = "ffmpeg -i ./audio_scrapping/somess.mp3 -c:v libx264 ./audio_scrapping/long.flac"
os.system(bashCommand)
bashCommand = "ffmpeg -i ./audio_scrapping/long.flac -ac 1 ./audio_scrapping/mono_long.flac"
os.system(bashCommand)
bashCommand = "cp ./audio_scrapping/mono_long.flac gs://radioscrapping/mono_long.flac"
os.system(bashCommand)
```
## Upload to bucket
### It has to read from a storage bucket, since the example is set up that way
```
import google.cloud.storage as storage
def upload_blob(bucket_name, source_file_name, destination_blob_name):
"""Uploads a file to the bucket."""
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_filename(source_file_name)
print('File {} uploaded to {}.'.format(
source_file_name,
destination_blob_name))
upload_blob("radioscrapping", "./audio_scrapping/mono_long.flac", "jupyter_tries/mono_long.flac")
```
# Run the NLP speech to text
```
def transcribe_gcs(gcs_uri):
"""Asynchronously transcribes the audio file specified by the gcs_uri."""
client = speech.SpeechClient()
audio = types.RecognitionAudio(uri=gcs_uri)
config = types.RecognitionConfig(
encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
language_code='es-CO')
operation = client.long_running_recognize(config, audio)
print('Waiting for operation to complete...')
response = operation.result(timeout=90)
# Each result is for a consecutive portion of the audio. Iterate through
# them to get the transcripts for the entire audio file.
return response
```
# Results NLP
```
ss = transcribe_gcs("gs://radioscrapping/jupyter_tries/mono_long.flac")
text1 = ss.results[0].alternatives[0].transcript
text2 = ss.results[1].alternatives[0].transcript
text3 = ss.results[2].alternatives[0].transcript
tex4 = ss.results[3].alternatives[0].transcript
```
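Indexing the results one by one works for a short clip, but since each result covers a consecutive portion of the audio (as noted in the function above), you can also just join them all. A small sketch, assuming `ss` is the response obtained above:
```
full_transcript = " ".join(result.alternatives[0].transcript for result in ss.results)
print(full_transcript)
```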
# NER
```
import six
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
import sys
def getsentimental(text):
client = language.LanguageServiceClient()
if isinstance(text, six.binary_type):
text = text.decode('utf-8')
document = types.Document(
content=text.encode('utf-8'),
type=enums.Document.Type.PLAIN_TEXT)
# Detect and send native Python encoding to receive correct word offsets.
encoding = enums.EncodingType.UTF32
if sys.maxunicode == 65535:
encoding = enums.EncodingType.UTF16
result = client.analyze_entity_sentiment(document, encoding)
return result.entities
```
### results
```
s = getsentimental(text1)
```
### The output is a response object; this function converts it into a list of dicts (JSON-like)
```
def jsonit(result):
listdict = []
for entity in result:
dictd = {}
dictd["name"] = entity.name
dictd["type"] = entity.type
dictd["Salience"] = entity.salience
mentionss = []
for mention in entity.mentions:
dict_m = {}
dict_m["Content"] = mention.text.content
dict_m["Magnitude"] = mention.sentiment.magnitude
dict_m["Sentiment"] = mention.sentiment.score
dict_m["Salience"] = entity.salience
mentionss.append(dict_m)
dictd["mentions"] = mentionss
listdict.append(dictd)
return listdict
sss = jsonit(s)
```
### Loop to turn it into a DataFrame
```
pre_df = []
for mention in sss:
Salienceg = mention["Salience"]
types = mention["type"]
name = mention["name"]
men = mention['mentions']
for mentis in men:
Magnitude = mentis['Magnitude']
Saliencei = mentis['Salience']
Sentiment = mentis['Sentiment']
tup = (Salienceg, types, name, Magnitude, Saliencei, Sentiment)
pre_df.append(tup)
col = [ "Salienceg",
"types",
"namei",
"Magnitude",
"Saliencei",
"Sentiment"
]
dfObj = pd.DataFrame(pre_df , columns=col)
dfObj
```
### Send the DataFrame to BigQuery
```
full_table_id = 'R_NER.Radio_tests'
project_id = 'proyecto-emiliano-isaza'
dfObj.to_gbq(full_table_id, project_id=project_id)
from datetime import datetime
from google.cloud import bigquery
from pydub import AudioSegment
from io import BytesIO
from google.cloud import storage
import os
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
```
# 1. Example of graph injection attack using GRB
GRB provides a unified evaluation scenario for fair comparisons between attacks and defenses. The scenario is **Black-box, Evasion, Inductive, Injection**. Take the case of a citation-graph classification system for example. The platform collects labeled data from previous papers and trains a GNN model. When a batch of new papers is submitted, it updates the graph and uses the trained model to predict labels for them.
* **Black-box**: Both the attacker and the defender have no knowledge about the applied methods each other uses.
* **Evasion**: GNNs are already trained in trusted data (e.g. authenticated users), which are untouched by the attackers but might have natural noises. Thus, attacks will only happen during the inference phase.
* **Inductive**: GNNs are used to classify unseen data (e.g. new users), i.e. validation or test data are unseen during training, which requires GNNs to generalize to out of distribution data.
* **Injection**: The attackers can only inject new nodes but not modify the target nodes directly. Since it is usually hard to hack into users' accounts and modify their profiles. However, it is easier to create fake accounts and connect them to existing users.
```
import os
import torch
import grb.utils as utils
```
## 1.1. Load Dataset
GRB datasets are named with the prefix *grb-*. There are four *modes* ('easy', 'medium', 'hard', 'full') for the test set, representing different average degrees of the test nodes and thus different levels of difficulty for attacking them. The node features are processed by *arctan* normalization (first standardization, then the arctan function), which puts the node features on the same scale.
```
from grb.dataset import Dataset
dataset_name = 'grb-cora'
dataset = Dataset(name=dataset_name,
data_dir="../../data/",
mode='full',
feat_norm='arctan')
adj = dataset.adj
features = dataset.features
labels = dataset.labels
num_features = dataset.num_features
num_classes = dataset.num_classes
test_mask = dataset.test_mask
```
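As a side note, the *arctan* normalization mentioned above (standardize first, then apply the arctan function) can be sketched roughly as follows; the exact scaling GRB applies may differ, this is only to illustrate why the features end up in a bounded range:
```
import numpy as np

def arctan_feat_norm(feat):
    # standardization: zero mean, unit variance per feature column
    feat = (feat - feat.mean(axis=0)) / (feat.std(axis=0) + 1e-8)
    # arctan squashes values into (-pi/2, pi/2); rescale into (-1, 1)
    return np.arctan(feat) * 2 / np.pi
```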
## 1.2. Graph Injection Attack
For graph injection attacks under the black-box setting, we first need to train a surrogate model, then transfer the generated attack nodes to a target model. Note that the attacker doesn't have any information about the target model, neither the model architecture nor the parameters. Here is an example of training GCN as the surrogate model and transferring to other models.
### 1.2.1 Train surrogate model
```
from grb.model.torch import GCN
from grb.utils.normalize import GCNAdjNorm
model_name = "gcn"
model_sur = GCN(in_features=dataset.num_features,
out_features=dataset.num_classes,
hidden_features=64,
n_layers=2,
adj_norm_func=GCNAdjNorm,
layer_norm=False,
residual=False,
dropout=0.5)
print(model_sur)
save_dir = "./saved_models/{}/{}".format(dataset_name, model_name)
save_name = "model_sur.pt"
device = "cuda:0"
feat_norm = None
train_mode = "inductive" # "transductive"
from grb.trainer.trainer import Trainer
trainer = Trainer(dataset=dataset,
optimizer=torch.optim.Adam(model_sur.parameters(), lr=0.01),
loss=torch.nn.functional.cross_entropy,
lr_scheduler=False,
early_stop=True,
early_stop_patience=500,
feat_norm=feat_norm,
device=device)
trainer.train(model=model_sur,
n_epoch=2000,
eval_every=1,
save_after=0,
save_dir=save_dir,
save_name=save_name,
train_mode=train_mode,
verbose=False)
# by trainer
test_score = trainer.evaluate(model_sur, dataset.test_mask)
print("Test score of surrogate model: {:.4f}".format(test_score))
```
### 1.2.2. Injection Attack
**Rules and constraints for attackers**:
* They have knowledge about the entire graph (including all nodes, edges and labels, excluding labels of the test nodes to attack), but do NOT have knowledge about the target model or the defense mechanism.
* They are allowed to inject a limited number of new nodes with limited edges, but are NOT allowed to modify the original graph.
* They are allowed to generate features of injected nodes as long as they remain unnoticeable by defenders (e.g. nodes with features that exceed the range can be easily detected).
* They are allowed to get the classification results from the target model through a limited number of queries.
#### FGSM (Fast Gradient Sign Method)
```
from grb.attack.injection import FGSM
attack = FGSM(epsilon=0.01,
n_epoch=1000,
n_inject_max=100,
n_edge_max=200,
feat_lim_min=-1,
feat_lim_max=1,
device=device)
```
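In generic terms (this is the textbook formulation of these methods, not a claim about GRB's exact implementation), an FGSM-style step nudges the injected node features in the direction of the sign of the gradient of the attack loss, roughly $x \leftarrow \mathrm{clip}\big(x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathcal{L})\big)$, while PGD (below) repeats such steps and projects the features back into the allowed range (here $[-1, 1]$) after each one.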
#### PGD (Projected Gradient Descent)
```
from grb.attack.injection import PGD
attack = PGD(epsilon=0.01,
n_epoch=1000,
n_inject_max=100,
n_edge_max=200,
feat_lim_min=-1,
feat_lim_max=1,
device=device)
```
#### RAND (Random)
```
from grb.attack.injection import RAND
attack = RAND(n_inject_max=100,
n_edge_max=200,
feat_lim_min=-1,
feat_lim_max=1,
device=device)
```
#### SPEIT
```
from grb.attack.injection import SPEIT
attack = SPEIT(lr=0.01,
n_epoch=1000,
n_inject_max=100,
n_edge_max=200,
feat_lim_min=-1,
feat_lim_max=1,
device=device)
```
#### TDGIA (Topological Defective Graph Injection Attack)
```
from grb.attack.injection import TDGIA
attack = TDGIA(lr=0.01,
n_epoch=1000,
n_inject_max=100,
n_edge_max=200,
feat_lim_min=-1,
feat_lim_max=1,
device=device)
```
### 1.2.3. Apply injection attack
```
adj_attack, features_attack = attack.attack(model=model_sur,
adj=adj,
features=features,
target_mask=test_mask,
adj_norm_func=model_sur.adj_norm_func)
features_attacked = torch.cat([features.to(device), features_attack])
test_score = utils.evaluate(model_sur,
features=features_attacked,
adj=adj_attack,
labels=dataset.labels,
adj_norm_func=model_sur.adj_norm_func,
mask=dataset.test_mask,
device=device)
print("Test score after attack for surrogate model: {:.4f}.".format(test_score))
```
### 1.2.4. Transfer to target model
```
model_name = "gcn"
save_dir = "./saved_models/{}/{}".format(dataset_name, model_name)
save_name = "model.pt"
device = "cuda:0"
model = torch.load(os.path.join(save_dir, save_name))
model = model.to(device)
model.eval()
test_score = utils.evaluate(model,
features=features_attacked,
adj=adj_attack,
labels=dataset.labels,
adj_norm_func=model.adj_norm_func,
mask=dataset.test_mask,
device=device)
print("Test score after attack for target model: {:.4f}.".format(test_score))
```
Data Source: https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant
Features consist of hourly average ambient variables:
* Temperature (T) in the range 1.81°C to 37.11°C
* Ambient Pressure (AP) in the range 992.89-1033.30 millibar
* Relative Humidity (RH) in the range 25.56% to 100.16%
* Exhaust Vacuum (V) in the range 25.36-81.56 cm Hg
* Net hourly electrical energy output (EP) in the range 420.26-495.76 MW
The averages are taken from various sensors located around the plant that record the ambient variables every second. The variables are given without normalization.
Dataset Information:
The dataset contains 9568 data points collected from a Combined Cycle Power Plant over 6 years (2006-2011), when the power plant was set to work with full load. Features consist of hourly average ambient variables Temperature (T), Ambient Pressure (AP), Relative Humidity (RH) and Exhaust Vacuum (V) to predict the net hourly electrical energy output (EP) of the plant.
A combined cycle power plant (CCPP) is composed of gas turbines (GT), steam turbines (ST) and heat recovery steam generators. In a CCPP, the electricity is generated by gas and steam turbines, which are combined in one cycle, and is transferred from one turbine to another. While the Vacuum is collected from and has an effect on the Steam Turbine, the other three ambient variables affect the GT performance.
```
!ls -ltr /data
spark
```
# Load Data
```
df = spark.read.format("csv").option("header","true")\
.option("inferSchema","true").load("/data/Combined_Cycle_Power_Plant.csv")
df.show()
df.cache()
```
# Convert Spark Dataframe to Pandas Dataframe
```
df.limit(10).toPandas().head()
```
## Vectorize the features
```
from pyspark.ml.feature import *
vectorizer = VectorAssembler()
vectorizer.setInputCols(["AT", "V", "AP", "RH"])
vectorizer.setOutputCol("features")
df_vect = vectorizer.transform(df)
df_vect.show(10, False)
print(vectorizer.explainParams())
```
## Fit Linear Regression Model
```
from pyspark.ml.regression import LinearRegression
lr = LinearRegression()
print(lr.explainParams())
lr.setLabelCol("EP")
lr.setFeaturesCol("features")
model = lr.fit(df_vect)
type(model)
```
### View model summary
```
print("R2:", model.summary.r2)
print("Intercept: ", model.intercept, "Coefficients", model.coefficients)
```
### Predict
```
df_pred = model.transform(df_vect)
df_pred.show()
```
### Evaluate
```
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator()
print(evaluator.explainParams())
evaluator = RegressionEvaluator(labelCol = "EP",
predictionCol = "prediction",
metricName = "rmse")
evaluator.evaluate(df_pred)
```
## Build a pipeline
```
from pyspark.ml.pipeline import Pipeline, PipelineModel
pipeline = Pipeline()
print(pipeline.explainParams())
pipeline.setStages([vectorizer, lr])
pipelineModel = pipeline.fit(df)
pipeline.getStages()
lr_model = pipelineModel.stages[1]
lr_model.coefficients
pipelineModel.transform(df).show()
evaluator.evaluate(pipelineModel.transform(df))
```
## Save the pipeline to disk to persist the model
```
pipelineModel.save("/tmp/lr-pipeline")
!tree /tmp/lr-pipeline
```
### Load the persisted model from the disk
```
saved_model = PipelineModel.load("/tmp/lr-pipeline")
saved_model.stages[1].coefficients
saved_model.transform(df).show()
df_train, df_test = df.randomSplit(weights=[0.7, 0.3], seed = 200)
pipelineModel = pipeline.fit(df_train)
evaluator.evaluate(pipelineModel.transform(df_test))
```
# Tune the model
```
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
paramGrid = ParamGridBuilder()\
.addGrid(lr.regParam, [0.1, 0.01]) \
.addGrid(lr.fitIntercept, [False, True])\
.addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
.build()
# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
trainRatio=0.8)
tuned_model = tvs.fit(vectorizer.transform(df_train))
tuned_model.bestModel, tuned_model.validationMetrics
df_test_pred = tuned_model.transform(vectorizer.transform(df_test))
df_test_pred.show()
evaluator.evaluate(df_test_pred)
```
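As a small follow-up sketch (not part of the original notebook), you can also inspect which hyperparameter combination won and look at the fitted coefficients; `paramGrid` and `tuned_model` are the objects created above:
```
# Each validation metric lines up with the corresponding ParamMap in paramGrid.
for params, metric in zip(paramGrid, tuned_model.validationMetrics):
    print(metric, {p.name: v for p, v in params.items()})

best_lr_model = tuned_model.bestModel   # the refit LinearRegressionModel
print(best_lr_model.coefficients, best_lr_model.intercept)
```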
# 1. Establish the network
```
from keras.layers import Input, Conv2D, Lambda, merge, Dense, Flatten,MaxPooling2D
from keras.models import Model, Sequential
from keras.callbacks import Callback, ModelCheckpoint, LearningRateScheduler, TerminateOnNaN
from util import EvaluateAccuracy
from keras.regularizers import l2
from keras import backend as K
from keras.optimizers import SGD,Adam
from keras.losses import binary_crossentropy
import numpy.random as rng
import numpy as np
import cv2
import os
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.utils import shuffle
%matplotlib inline
def W_init(shape,name=None):
"""Initialize weights as in paper"""
values = rng.normal(loc=0,scale=1e-2,size=shape)
return K.variable(values,name=name)
#//TODO: figure out how to initialize layer biases in keras.
def b_init(shape,name=None):
"""Initialize bias as in paper"""
values=rng.normal(loc=0.5,scale=1e-2,size=shape)
return K.variable(values,name=name)
input_shape = (105, 105, 1)
left_input = Input(input_shape)
right_input = Input(input_shape)
#build convnet to use in each siamese 'leg'
convnet = Sequential()
convnet.add(Conv2D(64,(10,10),activation='relu',input_shape=input_shape,
kernel_initializer=W_init,kernel_regularizer=l2(2e-4)))
convnet.add(MaxPooling2D())
convnet.add(Conv2D(128,(7,7),activation='relu',
kernel_regularizer=l2(2e-4),kernel_initializer=W_init,bias_initializer=b_init))
convnet.add(MaxPooling2D())
convnet.add(Conv2D(128,(4,4),activation='relu',kernel_initializer=W_init,kernel_regularizer=l2(2e-4),bias_initializer=b_init))
convnet.add(MaxPooling2D())
convnet.add(Conv2D(256,(4,4),activation='relu',kernel_initializer=W_init,kernel_regularizer=l2(2e-4),bias_initializer=b_init))
convnet.add(Flatten())
convnet.add(Dense(2048,activation="sigmoid",kernel_regularizer=l2(1e-3),kernel_initializer=W_init,bias_initializer=b_init))
#call the convnet Sequential model on each of the input tensors so params will be shared
encoded_l = convnet(left_input)
encoded_r = convnet(right_input)
#layer to merge two encoded inputs with the l1 distance between them
L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1]))
#call this layer on list of two input tensors.
L1_distance = L1_layer([encoded_l, encoded_r])
prediction = Dense(1,activation='sigmoid',bias_initializer=b_init)(L1_distance)
siamese_net = Model(inputs=[left_input,right_input],outputs=prediction)
optimizer = Adam(0.00006)
# Load some weights into the model.
weights_path = 'weights/siamese_net_epoch-136_loss-0.0125_val_loss-0.0918.h5'
siamese_net.load_weights(weights_path, by_name=True)
#//TODO: get layerwise learning rates and momentum annealing scheme described in the paper working
siamese_net.compile(loss="binary_crossentropy",optimizer=optimizer)
siamese_net.count_params()
```
# 2. Load test images
```
#load reference image
ref_image = cv2.imread('test/181122205344_L17/google_181122205344.png', cv2.IMREAD_GRAYSCALE)
ref_height, ref_width = ref_image.shape
#load match image
mark_image = cv2.imread('test/181122205344_L17/landmark.png', cv2.IMREAD_GRAYSCALE)
mark_height, mark_width = mark_image.shape
#resize the image to match the network input
resized_mark = cv2.resize(mark_image, (105, 105))
resize_rate = (105/float(mark_height), 105/float(mark_width))
ref_size = (int(ref_width*resize_rate[1]), int(ref_height*resize_rate[0]))
resized_ref = cv2.resize(ref_image, ref_size)
print(ref_height*resize_rate[0], ref_width*resize_rate[1])
#show the images
plt.figure('reference',figsize=(10,10))
plt.title('reference')
plt.imshow(resized_ref, cmap='gray')
plt.figure("landmark", figsize=(3,3))
plt.title('landmark')
plt.imshow(resized_mark, cmap='gray')
```
# 3. Start matching process
```
#set the step size
step = [5, 5]
#numbers of pairs per batch
batchsize = 128
#initialize a numpy array to store scores
scores = np.zeros(((resized_ref.shape[0]-105)//step[0]+1, (resized_ref.shape[1]-105)//step[1]+1))
pair_num = scores.shape[0]*scores.shape[1]
best_score = 0
best_location = (0, 0)
batch_cnt = 0
cnt = 0
pair_list = []
location_list = []
for i in range(0, resized_ref.shape[0]-105, step[0]):
for j in range(0, resized_ref.shape[1]-105, step[1]):
#make a batch pairs
if batch_cnt < batchsize:
ref_patch = resized_ref[i:i+105, j:j+105]
#print([i,j])
pair_list.append(ref_patch)
location_list.append((j,i))
batch_cnt += 1
cnt += 1
continue
pairs = [np.concatenate([resized_mark,]*batchsize, axis=0).reshape(batchsize,105,105,1),
np.concatenate(pair_list, axis=0).reshape(batchsize,105,105,1)]
probs = siamese_net.predict(pairs)
print("score for %d/%d pair: %f"%(cnt, pair_num, probs[-1]))
for k, prob in enumerate(probs):
if prob > best_score:
best_score = prob
best_location = location_list[k]
batch_cnt = 0
location_list = []
pair_list = []
print('best score is: %f'%best_score)
print(best_location)
#concatenate to make a 3-channel image
show_ref = np.concatenate([resized_ref[:,:,np.newaxis],]*3, axis=-1)
result = cv2.rectangle(show_ref, best_location, (best_location[0]+105,best_location[1]+105), (0,255,0), thickness=2)
plt.figure('result',figsize=(10,10))
plt.title('result')
plt.imshow(result)
```
<img style="float: right;" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAOIAAAAjCAYAAACJpNbGAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAABR0RVh0Q3JlYXRpb24gVGltZQAzLzcvMTNND4u/AAAAHHRFWHRTb2Z0d2FyZQBBZG9iZSBGaXJld29ya3MgQ1M26LyyjAAACMFJREFUeJztnD1y20gWgD+6nJtzAsPhRqKL3AwqwQdYDpXDZfoEppNNTaWbmD7BUEXmI3EPMFCR2YI1UDQpdAPqBNzgvRZA/BGUZEnk9FeFIgj0z2ugX7/XP+jGer2mLv/8b6d+4Efgf/8KG0+Zn8XyXLx+bgEslqegcfzxSY3Irrx6bgEsFssBWsRGowGufwHAYtq7u+H6fUCOxTTWax4wBAbr+SRqNDKesOv3gN/133sW0yh927j1mucIaFWINl7PJ+OcvMcfW8Bol3iN44+mLIOsTCp3UJFfAETr+WRQcG8EOJpunEnTyDlYzycbeWr5xxq3jOF6PglK8ix9buv5xCsrAzBkMV1l5OwD/aJ4BXzV3+8F9z4gz/hTSbz8cxc84FuNvDc4VIsYA7+qohmGwAnycA194G22YqUYlZxv4vpN4AuwBv4oON5m8k3TVLnK4sYFcRyN86dWvCwnlCvFCeUVvwX8CkSZZ5eWs5mLJWE/VZThBMgpfirPk5J4f1SU4QsQ6LNP4+j9OkSUKdRiGlD87CWe3PcyR5PFdAhc1cz/joOziMoIeVF95GX1EGVY6bWhvsAeZQrm+kON80PDneD6PRbTi4LQpmJfsZieFaR1qXlXURh3y2BaBPyG63sspv0t6e+CKJTrf2YxHe8Qr6z8AXBdGbMoHgCTshgr4AiItfxljenPJGv5roCi+rGVw1TExTTWl99ThRsglfYHUnF7SMv+Bhjn4idxbhFLGiAu6gjXD3LuUBF5VzWi3CoAfMP1kxe7mNYZMT5DLFgf13eAXi3ZtvMOsUb3V3J5/mmqy+/66RbnTC1LFdfIu/kd8Qx2bTQeg2GBTPfiUF1TgHNE0QaIq/JDX9RKr/WBy/V8EhfEHWncWMO2EKV8S7UypYnYdE2r+o8gyj5MHXVYsZh+JnG7A+3LPQxR5g9II/UJ148ockmrybqm2+Qapo6gppwB8J7EM6jqaz8u0lhfkXgB58BKPam6rvEdh2kRARbTMa7/HXEfVqnW8hxxWwE+5+JJRTYd9CM90gxw/XFuMKMo/yTNDzUkLnbr6rCYnuH6N8igQ3CvNPJproDPuH6MKMd4Z5kMUjnrh98tn1if72/Ie729Vzq708L0YV3/HGmgB4iHsjOProhhd1lrEr4zaz/FvM4lolTnqWum/6jKmeuDmFb1jHylNg96hPQbhcU0wPVBXESvQI4W5aNshsK4jeOPhSOcOaThMVb48dhU8m2UlR+29ZHzrqyhLL0EaTROteGt67EYIsT6F1HXC/ikcvS00dl51PRwLaIwQtzCxGWRFnRMkT8v/SyAy8I+iliHJtDUsHHq7imipE42GtJanxdcB6mgQcm9MmKNs1m5F9MI13+n+cXZSEpAeV8mQgZqNkmU/HsuT7kf4PrGhXcK0h1SXv7iPKsJKCrDYvoV17+meMqhiDFlll7GEb4U3iseAf+k7mqksmU9qUoaj73E7TEtol3iZnks7Moai8WylUN3TS0WANbzyYv2rqxFtFheANYi7iGNRoPOrO2QGTQIu8vhU8vSmbWNDAHQD7vLYWfWbgFx2F3ee3FBZ9ZuIgMpTWAQdpeRXm9pPoPOrD3UMCtkQM4BRmF3ubG6ZZdxkOfCWsT9pU96CuX56KfOjeIFVC8Ar8NI0xuyOQJsVkWl8xzptQGPNY/6xFiLuL+0gIu0FVTrNESmbK7C7tLrzNpmPW0EeGF32UyFN19UnCAT4ZHGWWnYqDNrB4jViZBK/kbD9sLuMiBZSD8AVp1Z+0LD/NmZta+BIzOS3pm1xwBhd9kvkeEGUbQeqSmIdHhkXnGs5fIQRUxPV1x0Zm2zMuoq7C69rU/yBWAt4v7iAd86s/ZaDweZP+wBvwBOZ9b2SCrrmPzk+AWizA09j1QxMK4gZumcWKUWMvkdA56mfxN2l7GmHWk6V2F32Qi7yxaIsmnYHvkJ9zEQqAwBotQXwK2m0c+EN/Kk8zPTZiOkIWrp/xNTnpeOtYh7iFauN+k5W+0vXab6UsbyecAw229SxWiG3aVZ7NBCKrGHuneazy2iyBeIuxkjk9UDE1bzOtJ4IzbdwysNN0D6dnf9Rk3/iKSBWOnhUbASSWW+DbvLWM+HKreZ3O/r77gza5u842w6LxFrEfcTj+Jv3mK4q7Co63hE+fI6E94hUaT0cry+XushSuvoNZO2CdsCrlXJHDYVMUIUJso2BmhfL+wuV6rMvVR6AXnS1428XupaE7Hwnrqkg4cMGD0lr3NfpVegrUw1m2sN0+crNirEX1uTqiPbPoyI/QSKKmqA9I9aer+fcR2zxIj7GiMV+EYVIkZc3r5eH2rYI+0vnpBYIE/vGwUCdYM7s3agbqXJu58VIOwug86sfd2ZtSPNKwi7S9PHy4UnscCmXKuUZQRdsqbPwCHp2754pKYnW0akcZBO/x2df29XnvA//6iV8T3TSluBmOQlR+v5JNvaHixlDZRalRZifbZaAg3vIIrkmP6YVu6owI1M9x2r0vVIFCBGXNLS96Ph45IGY2ey6e1DY20UMaLGItUXoIhVvCv5tvDg2MWLqYNaoKBKWe6Z7gBR8OwAzZOyD4poBmtidlwt/gIxw/QHz0+oWKIoj19fRz8p3YOjoV8195F5l31ltZ5PfnluISyW+/IK6SPstRIiH/FaLHvLa2R+6F6f978AVsD7v0vf0HK4vNK9VfbVojSBceP4o/PcglgsD8GMmjaRbRCc1PEQIrbv45nlIfleIrs778XkrcWSZXMcXPZyqbvfxy7ckuyqHJPslJzH9c3We2ZRbx1O/07ziJbDI1FE2Qwp4n4DNzHJhkZF16+3bnwrCmi40U2eWoj7KZvobn7+YtKO1vPJVyyWPSZrER1kNU0TqfienpvlaWZR7oX+3tba6lxcX7MK3tNfo2RlpNc8tthsIFbAKYtpsA+TtRbLNp5/H4/EFXX0MOfbOGUxvbCKaDkEnl8Rq0jc1ayFjhFFjKwiWg6B/wNk+JCXXNBIXQAAAABJRU5ErkJggg==">
# Running a simulation with PCSE/LINTUL3
The LINTUL model (Light INTerception and UtiLisation) is a simple generic crop model, which simulates dry
matter production as the result of light interception and utilization with a constant light use efficiency.
In PCSE the LINTUL family of models has been implemented including the LINTUL3 model which is used for
simulation of crop production under water-limited and nitrogen-limited conditions.
For the third example, we will use LINTUL3 for simulating spring-wheat in the Netherlands under water-limited
and nitrogen-limited conditions. For the example we will assume that data files are in the `data` directory within the directory where this notebook is located. This will be the case if you downloaded the notebooks from github.
First we will import the necessary modules and define the data directory. We assume that you have the `pcse`, `matplotlib` and `pandas` packages installed on your system.
```
%matplotlib inline
import os, sys
import pcse
import matplotlib.pyplot as plt
import pandas as pd
import yaml
data_dir = os.path.join(os.getcwd(), "data")
import pcse
print("This notebook was built with:")
print("python version: %s " % sys.version)
print("PCSE version: %s" % pcse.__version__)
```
## Input requirements
For running the PCSE/LINTUL3 (and PCSE models in general), you need three types of inputs:
1. Model parameters that parameterize the different model components. These parameters usually
consist of a set of crop parameters (or multiple sets in case of crop rotations), a set of soil parameters
and a set of site parameters. The latter provide ancillary parameters that are specific for a location.
2. Driving variables represented by weather data which can be derived from various sources.
3. Agromanagement actions which specify the farm activities that will take place on the field that is simulated
by PCSE. For defining the agromanagement we will use the new `AgroManager` which replaces the `timerdata`
definition that was used previously.
Reading model parameters
------------------------
Model parameters can be easily read from the input files using the `PCSEFileReader`. However, PCSE models expect a single set of parameters and therefore they need to be combined using the
`ParameterProvider`::
```
from pcse.fileinput import PCSEFileReader
from pcse.base import ParameterProvider
crop = PCSEFileReader(os.path.join(data_dir, "crop", "lintul3_springwheat.crop"))
soil = PCSEFileReader(os.path.join(data_dir, "soil", "lintul3_springwheat.soil"))
site = PCSEFileReader(os.path.join(data_dir, "site", "lintul3_springwheat.site"))
parameterprovider = ParameterProvider(soildata=soil, cropdata=crop, sitedata=site)
```
Reading weather data
--------------------
For reading weather data we will use the ExcelWeatherDataProvider. This WeatherDataProvider uses nearly the same
file format as is used for the CABO weather files but stores its data in a Microsoft Excel file, which makes the
weather files easier to create and update:
```
from pcse.fileinput import ExcelWeatherDataProvider
weatherdataprovider = ExcelWeatherDataProvider(os.path.join(data_dir, "meteo", "nl1.xls"))
print(weatherdataprovider)
```
Defining agromanagement
---------------------------
Defining agromanagement needs a bit more explanation because agromanagement is a relatively
complex piece of PCSE. The agromanagement definition for PCSE is written in a format called `YAML` and
for the current example looks like this:
Version: 1.0
AgroManagement:
- 2006-01-01:
CropCalendar:
crop_name: wheat
variety_name: spring-wheat-1
crop_start_date: 2006-03-31
crop_start_type: emergence
crop_end_date: 2006-08-20
crop_end_type: earliest
max_duration: 300
TimedEvents:
- event_signal: apply_n
name: Nitrogen application table
comment: All nitrogen amounts in g N m-2
events_table:
- 2006-04-10: {amount: 10, recovery: 0.7}
- 2006-05-05: {amount: 5, recovery: 0.7}
StateEvents: null
The agromanagement definition starts with `Version:` indicating the version number of the agromanagement file
while the actual definition starts after the label `AgroManagement:`. Next a date must be provided which sets the
start date of the campaign (and the start date of the simulation). Each campaign is defined by zero or one
CropCalendars and zero or more TimedEvents and/or StateEvents. The CropCalendar defines the crop type, date of sowing,
date of harvesting, etc. while the Timed/StateEvents define actions that are either connected to a date or
to a model state.
In the current example, the campaign starts on 2006-01-01, there is a crop calendar for spring-wheat starting on
2006-03-31 with a harvest date of 2006-08-20 or earlier if the crop reaches maturity before this date.
Next there are timed events defined for applying N fertilizer at 2006-04-10 and 2006-05-05. The current example
has no state events. For a thorough description of all possibilities see the section on AgroManagement in the
Reference Guide.
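Since the `yaml` package was already imported at the top of this notebook, the agromanagement snippet shown above can also be parsed directly to inspect its structure. The sketch below is only an illustration of how the campaign/CropCalendar/TimedEvents nesting turns into plain Python dictionaries and lists; it is not how PCSE itself reads agromanagement files (that is done with the `YAMLAgroManagementReader` shown next).
```
import yaml

# The agromanagement definition from above, embedded as a string for illustration only
agro_yaml = """
Version: 1.0
AgroManagement:
- 2006-01-01:
    CropCalendar:
      crop_name: wheat
      variety_name: spring-wheat-1
      crop_start_date: 2006-03-31
      crop_start_type: emergence
      crop_end_date: 2006-08-20
      crop_end_type: earliest
      max_duration: 300
    TimedEvents:
    - event_signal: apply_n
      name: Nitrogen application table
      comment: All nitrogen amounts in g N m-2
      events_table:
      - 2006-04-10: {amount: 10, recovery: 0.7}
      - 2006-05-05: {amount: 5, recovery: 0.7}
    StateEvents: null
"""

# Parse into plain Python objects (dates become datetime.date keys)
parsed = yaml.safe_load(agro_yaml)
campaign_start, campaign = next(iter(parsed["AgroManagement"][0].items()))
print(campaign_start)                                  # start date of the first campaign
print(campaign["CropCalendar"]["crop_name"])           # 'wheat'
print(campaign["TimedEvents"][0]["events_table"][0])   # first nitrogen application
```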
Loading the agromanagement definition must be done with the YAMLAgroManagementReader::
```
from pcse.fileinput import YAMLAgroManagementReader
agromanagement = YAMLAgroManagementReader(os.path.join(data_dir, "agro", "lintul3_springwheat.agro"))
print(agromanagement)
```
## Starting and running the LINTUL3 model
We have now all parameters, weather data and agromanagement information available to start the LINTUL3 model:
```
from pcse.models import LINTUL3
lintul3 = LINTUL3(parameterprovider, weatherdataprovider, agromanagement)
lintul3.run_till_terminate()
```
## Getting and visualizing results
Next, we can easily get the output from the model using the get_output() method and turn it into a pandas DataFrame:
```
output = lintul3.get_output()
df = pd.DataFrame(output).set_index("day")
df.tail()
```
Finally, we can visualize the results from the pandas DataFrame with a few commands:
```
titles = {"DVS":("Development stage", "-"),
"TGROWTH": ("Total biomass (above and below ground)", "g/m2"),
"LAI": ("Leaf area Index", "-"),
"NUPTT": ("Total nitrogen uptake", "gN/m2"),
"TRAN": ("Transpiration", "mm/day"),
"TIRRIG": ("Total irrigation", "mm"),
"TNSOIL": ("Total soil inorganic nitrogen", "gN/m2"),
"TRAIN": ("Total rainfall", "mm"),
"TRANRF": ("Transpiration reduction factor", "-"),
"TRUNOF": ("Total runoff", "mm"),
"TAGBM": ("Total aboveground biomass", "g/m2"),
"TTRAN": ("Total transpiration", "mm"),
"WC": ("Soil water content", "m3/m3"),
"WLVD": ("Weight dead leaves", "g/m2"),
"WLVG": ("Weight green leaves", "g/m2"),
"WRT": ("Weight roots", "g/m2"),
"WSO": ("Weight storage organs", "g/m2"),
"WST": ("Weight stems", "g/m2")
}
fig, axes = plt.subplots(nrows=9, ncols=2, figsize=(16,40))
for key, axis in zip(df.columns, axes.flatten()):
name, unit = titles[key]
title = f"{key} - {name}"
df[key].plot(ax=axis, title=title)
axis.set_ylabel(f"[{unit}]")
fig.autofmt_xdate()
```
```
print("Doing the machine learning...")
from keras.models import Sequential
from keras.layers import Dense
import numpy
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# fix random seed for reproducibility
numpy.random.seed(7)
LAYERS = [8,16,16,1]
dataset = numpy.loadtxt("MakeathonAccept.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:LAYERS[0]]
Y = dataset[:,LAYERS[0]:LAYERS[0] + 1]
# create model
model = Sequential()
model.add(Dense(LAYERS[0], input_dim=LAYERS[0], activation='relu'))
model.add(Dense(LAYERS[1], activation='relu'))
model.add(Dense(LAYERS[2], activation='relu'))
model.add(Dense(LAYERS[3], activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=50, batch_size=10, shuffle=True, validation_split=.25)
# USER APPLY TEST
# scores = model.evaluate(testX, testY)
# print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# Predict your chance of getting into CoLab!
name = input('Enter your name: ')
print('Hello', name)
gradYear = int(input('When did or will you graduate? (enter 0 if before 2010)'))
gradYear = gradYear%10
levelEdu = int(input('What is your level of education? (0 = undergrad, 1 = master, 2 = doctoral, 3 = no school) '))
skill = int(input('Are you a Business Designer(0), Designer(1), Technologist(2), or something else(3)?'))
craft = int(input('What is your craft score? (inputted by HR)'))
r1 = int(input('What is your Resume Score? (NLP)'))
r2 = int(input('What is your Cover Letter Score? (NLP)'))
totalR = r1+r2
totalAll = totalR + craft
userData = numpy.array([[gradYear, levelEdu, skill, craft, r1, r2, totalR, totalAll]])
# calculate predictions
# testY = testdataset[:,8]
yourPrediction = model.predict(userData, batch_size=1, verbose=0)
print("Your Prediction: ", yourPrediction[0][0]*100)
# TEST DATASET
testdataset = numpy.loadtxt("MakeathonFinalTest.csv", delimiter=",")
testX = testdataset[:,0:LAYERS[0]]
testY = testdataset[:,LAYERS[0]]
predictions = model.predict(testX, batch_size=1, verbose=0)
print("Other Predictions: ", predictions)
# INTERACTIVE TEST
gradScore = widgets.FloatSlider(min=2011, max=2020, step=1, value=0)
educationLevelScore = widgets.FloatSlider(min=0, max=3, step=1, value=0)
skillScore = widgets.FloatSlider(min=0, max=3, step=1, value=0)
craftScore = widgets.FloatSlider(min=0, max=3, step=.25, value=0)
r1Score = widgets.FloatSlider(min=0, max=3, step=.5, value=0)
r2Score = widgets.FloatSlider(min=0, max=3, step=.5, value=0)
@interact(grad = gradScore, eduLevel = educationLevelScore, skill = skillScore, craft = craftScore,
r1 = r1Score, r2 = r2Score)
def test(grad, eduLevel, skill, craft, r1, r2):
totalRScore = r1+r2
totalAllScore = totalRScore + craft
grad = grad%10
userData = numpy.array([[grad, eduLevel, skill, craft, r1, r2, totalRScore, totalAllScore]])
yourPrediction = model.predict(userData, batch_size=1, verbose=0)
print('eduLevel: 0 = undergrad, 1 = masters, 2 = doctoral, 3 = none')
print('skill: 0 = BD, 1 = D, 2 = T, 3 = W\n')
print("Your Prediction: ", yourPrediction[0][0]*100)
```
# Loss Landscapes on CIFAR
```
import os
import json
import copy
from pathlib import Path
import torch
from torch.utils.data import DataLoader
import models
import ops.tests as tests
import ops.datasets as datasets
import ops.loss_landscapes as lls
# config_path = "configs/cifar10_general.json"
config_path = "configs/cifar100_general.json"
with open(config_path) as f:
args = json.load(f)
print("args: \n", args)
dataset_args = copy.deepcopy(args).get("dataset")
train_args = copy.deepcopy(args).get("train")
val_args = copy.deepcopy(args).get("val")
model_args = copy.deepcopy(args).get("model")
optim_args = copy.deepcopy(args).get("optim")
env_args = copy.deepcopy(args).get("env")
dataset_train, dataset_test = datasets.get_dataset(**dataset_args, download=True)
dataset_name = dataset_args["name"]
num_classes = len(dataset_train.classes)
dataset_train = DataLoader(dataset_train,
shuffle=True,
num_workers=train_args.get("num_workers", 4),
batch_size=train_args.get("batch_size", 128))
dataset_test = DataLoader(dataset_test,
num_workers=val_args.get("num_workers", 4),
batch_size=val_args.get("batch_size", 128))
print("Train: %s, Test: %s, Classes: %s" % (
len(dataset_train.dataset),
len(dataset_test.dataset),
num_classes
))
```
## Model
```
# VGG
# name = "vgg_dnn_19"
# name = "vgg_dnn_smoothing_19"
# name = "vgg_mcdo_19"
# name = "vgg_mcdo_smoothing_19"
# ResNet
name = "resnet_dnn_18"
# name = "resnet_dnn_smoothing_18"
# name = "resnet_mcdo_18"
# name = "resnet_mcdo_smoothing_18"
# name = "resnet_dnn_50"
# name = "resnet_mcdo_50"
# name = "resnet_dnn_smoothing_50"
# name = "resnet_mcdo_smoothing_50"
# Preact ResNet
# name = "preresnet_dnn_50"
# name = "preresnet_mcdo_50"
# name = "preresnet_dnn_smoothing_50"
# name = "preresnet_mcdo_smoothing_50"
# ResNeXt
# name = "resnext_dnn_50"
# name = "resnext_mcdo_50"
# name = "resnext_dnn_smoothing_50"
# name = "resnext_mcdo_smoothing_50"
# WideResNet
# name = "wideresnet_dnn_50"
# name = "wideresnet_mcdo_50"
# name = "wideresnet_dnn_smoothing_50"
# name = "wideresnet_mcdo_smoothing_50"
uid = "" # Model UID required
model = models.get_model(name, num_classes=num_classes,
stem=model_args.get("stem", False))
models.load(model, dataset_name, uid)
gpu = torch.cuda.is_available()
model = model.cuda() if gpu else model.cpu()
metrics_list = []
for n_ff in [1]:
print("N: %s, " % n_ff, end="")
*metrics, cal_diag = tests.test(model, n_ff, dataset_test, verbose=False, gpu=gpu)
metrics_list.append([n_ff, *metrics])
```
## Investigate the Loss Landscape
```
scale = 1e-1
n = 21
metrics_grid = lls.get_loss_landscape(
model, 1, dataset_train,
x_min=-1.0 * scale, x_max=1.0 * scale, n_x=n, y_min=-1.0 * scale, y_max=1.0 * scale, n_y=n,
)
leaderboard_path = os.path.join("leaderboard", "logs", dataset_name, model.name)
Path(leaderboard_path).mkdir(parents=True, exist_ok=True)
metrics_dir = os.path.join(leaderboard_path, "%s_%s_%s_x%s_losslandscape.csv" % (dataset_name, model.name, uid, int(1 / scale)))
metrics_list = [[*grid, *metrics] for grid, metrics in metrics_grid.items()]
tests.save_metrics(metrics_dir, metrics_list)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
training_set = pd.read_csv('titanic/train.csv')
training_set.head()
```
# Exploratory Data Analysis
```
sns.heatmap(training_set.isnull(), yticklabels=False, cbar=False, cmap='plasma')
sns.set_style('darkgrid')
sns.countplot(x='Survived', data=training_set)
sns.countplot(x='Survived', data=training_set, hue='Sex')
sns.countplot(x='Survived', data=training_set, hue='Pclass')
sns.displot(training_set['Age'].dropna(), bins=30)
training_set['Age'].plot.hist(bins=30)
training_set.info()
sns.countplot(x='SibSp', data=training_set)
training_set['Fare'].hist(bins=40, figsize=(10, 4))
```
# Data Preparation
## Imputation
https://en.wikipedia.org/wiki/Imputation_(statistics)
```
sns.boxplot(x='Pclass', y='Age', data=training_set)
def impute_age(columns):
""" Функция для импутации Age"""
Age = columns[0]
Pclass = columns[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass ==2:
return 29
else:
return 24
else:
return Age
training_set['Age'] = training_set[['Age', 'Pclass']].apply(impute_age, axis=1)
sns.heatmap(training_set.isnull(), yticklabels=False, cbar=False, cmap='plasma')
training_set.drop('Cabin', axis=1, inplace=True)
training_set.head()
sns.heatmap(training_set.isnull(), yticklabels=False, cbar=False, cmap='plasma')
training_set.dropna(inplace=True) # Drop the remaining NaN rows
```
## Creating dummy variables for Sex and Embarked
```
# Avoid multicollinearity:
# the male and female columns perfectly predict each other, so we use drop_first
sex = pd.get_dummies(training_set['Sex'], drop_first=True)
sex.head()
embark = pd.get_dummies(training_set['Embarked'], drop_first=True)
embark.head()
training_set = pd.concat([training_set, sex, embark], axis=1)
training_set.head()
training_set.drop(['Sex', 'Embarked', 'Name', 'Ticket'], axis=1, inplace=True)
training_set.drop('PassengerId', axis=1, inplace=True)
training_set.head()
```
# Working with the Model
```
# Split the data
X = training_set.drop('Survived', axis=1)
y = training_set['Survived']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=333)
from sklearn.linear_model import LogisticRegression
lrm = LogisticRegression()
lrm.fit(X_train, y_train)
predictions = lrm.predict(X_test)
from sklearn.metrics import classification_report
print(classification_report(y_test, predictions))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, predictions)
```
```
# <!-- collapse=True -->
import numpy as np
np.random.seed(0)
import matplotlib
matplotlib.use("svg")
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve, CalibratedClassifierCV
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score, log_loss)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
# <!-- collapse=True -->
X, y = datasets.make_classification(n_samples=100000, n_features=20,
n_informative=2, n_redundant=2)
train_samples = 100 # Samples used for training the models
X_train = X[:train_samples]
X_test = X[train_samples:]
y_train = y[:train_samples]
y_test = y[train_samples:]
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = LinearSVC(C=1.0)
rfc = RandomForestClassifier(n_estimators=100)
plt.figure(figsize=(9, 9))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(svc, 'Support Vector Classification'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
# <!-- collapse=True -->
n_samples = 50000
n_bins = 3 # use 3 bins for calibration_curve as we have 3 clusters here
# Generate 3 blobs with 2 classes where the second blob contains
# half positive samples and half negative samples. Probability in this
# blob is therefore 0.5.
centers = [(-5, -5), (0, 0), (5, 5)]
X, y = datasets.make_blobs(n_samples=n_samples, n_features=2, cluster_std=1.0,
centers=centers, shuffle=False, random_state=42)
y[:n_samples // 2] = 0
y[n_samples // 2:] = 1
sample_weight = np.random.RandomState(42).rand(y.shape[0])
# split train, test for calibration
X_train, X_test, y_train, y_test, sw_train, sw_test = \
train_test_split(X, y, sample_weight, test_size=0.9, random_state=42)
plt.figure()
y_unique = np.unique(y)
colors = cm.rainbow(np.linspace(0.0, 1.0, y_unique.size))
for this_y, color in zip(y_unique, colors):
this_X = X_train[y_train == this_y]
this_sw = sw_train[y_train == this_y]
plt.scatter(this_X[:, 0], this_X[:, 1], s=this_sw * 50, c=color, alpha=0.5,
label="Class %s" % this_y)
plt.legend(loc="best")
plt.title("Data")
# <!-- collapse=True -->
# Gaussian Naive-Bayes with no calibration
clf = GaussianNB()
clf.fit(X_train, y_train) # GaussianNB itself does not support sample-weights
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with isotonic calibration
clf_isotonic = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_isotonic.fit(X_train, y_train, sw_train)
prob_pos_isotonic = clf_isotonic.predict_proba(X_test)[:, 1]
# Gaussian Naive-Bayes with sigmoid calibration
clf_sigmoid = CalibratedClassifierCV(clf, cv=2, method='sigmoid')
clf_sigmoid.fit(X_train, y_train, sw_train)
prob_pos_sigmoid = clf_sigmoid.predict_proba(X_test)[:, 1]
print("Brier scores: (the smaller the better)")
clf_score = brier_score_loss(y_test, prob_pos_clf, sw_test)
print("No calibration: %1.3f" % clf_score)
clf_isotonic_score = brier_score_loss(y_test, prob_pos_isotonic, sw_test)
print("With isotonic calibration: %1.3f" % clf_isotonic_score)
clf_sigmoid_score = brier_score_loss(y_test, prob_pos_sigmoid, sw_test)
print("With sigmoid calibration: %1.3f" % clf_sigmoid_score)
# <!-- collapse=True -->
plt.figure()
order = np.lexsort((prob_pos_clf, ))
plt.plot(prob_pos_clf[order], 'r', label='No calibration (%1.3f)' % clf_score)
plt.plot(prob_pos_isotonic[order], 'g', linewidth=3,
label='Isotonic calibration (%1.3f)' % clf_isotonic_score)
plt.plot(prob_pos_sigmoid[order], 'b', linewidth=3,
label='Sigmoid calibration (%1.3f)' % clf_sigmoid_score)
plt.plot(np.linspace(0, y_test.size, 51)[1::2],
y_test[order].reshape(25, -1).mean(1),
'k', linewidth=3, label=r'Empirical')
plt.ylim([-0.05, 1.05])
plt.xlabel("Instances sorted according to predicted probability "
"(uncalibrated GNB)")
plt.ylabel("P(y=1)")
plt.legend(loc="upper left")
plt.title("Gaussian naive Bayes probabilities")
```
# Loading data in sktime
Data for use with sktime should be stored in pandas DataFrame objects with cases represented by rows and series data for each dimension of a problem stored in columns (the specifics of the data structure are described in more detail in the section below). Data can be loaded into the sktime format through various means, such as loading directly from a bespoke sktime file format (.ts) or supported file formats provided by other existing data sources (such as ARFF and .tsv). Further, data can also be loaded through other means into a long-table format and then converted to the sktime format using a provided method.
Below is a brief description of the .ts file format, an introduction to how data are stored in dataframes for sktime, and examples of loading data from a variety of file formats.
## Representing data with .ts files
The most typical use case is to load data from a locally stored .ts file. The .ts file format has been created for representing problems in a standard format for use with sktime. These files include two main parts:
* header information
* data
The header information is used to facilitate simple representation of the data through including metadata about the structure of the problem. The header contains the following:
@problemName <problem name>
@timeStamps <true/false>
@univariate <true/false>
@classLabel <true/false> <space delimited list of possible class values>
@data
The data for the problem should begin after the @data tag. In the simplest case where @timestamps is false, values for a series are expressed in a comma-separated list and the index of each value is relative to its position in the list (0, 1, ..., m). A _case_ may contain 1 to many dimensions, where cases are line-delimited and dimensions within a case are colon (:) delimited. For example:
2,3,2,4:4,3,2,2
13,12,32,12:22,23,12,32
4,4,5,4:3,2,3,2
This example data has 3 _cases_, where each case has 2 _dimensions_ with 4 observations per dimension. Missing readings can be specified using ?, or for sparse datasets, readings can be specified by setting @timestamps to true and representing the data with tuples in the form of (timestamp, value). For example, the first case in the example above could be specified in this representation as:
(0,2),(1,3)(2,2)(3,4):(0,4),(1,3),(2,2),(3,2)
Equivalently,
2,5,?,?,?,?,?,5,?,?,?,?,4
could be represented with timestamps as:
(0,2),(0,5),(7,5),(12,4)
For classification problems, the class label for a case should be specified in the last dimension and @classLabel should be in the header information to specify the set of possible class values. For example, if a case consists of a single dimension and has a class value of 1 it would be specified as:
1,4,23,34:1
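Putting the header and data conventions together, a complete minimal univariate classification file could look like the sketch below. This is purely illustrative (it is not one of the datasets bundled with sktime, and real .ts files often carry additional header tags), but it combines the fields described above:
    @problemName ExampleProblem
    @timeStamps false
    @univariate true
    @classLabel true 0 1
    @data
    2,3,2,4:1
    13,12,32,12:0
    4,4,5,4:1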
## Storing data in a pandas DataFrame
The core data structure for storing datasets in sktime is a pandas DataFrame, where rows of the dataframe correspond to cases, and columns correspond to dimensions of the problem. The readings within each column of the dataframe are stored as pandas Series objects; the use of Series facilitates simple storage of sparse data or series with non-integer timestamps (such as dates). Further, if the loaded problem is a classification problem, the standard loading functionality within sktime will return the class values in a separate index-aligned numpy array (with an option to combine X and Y into a single dataframe for high-level task construction). For example, for a problem with n cases that each have data across c dimensions:
DataFrame:
index | dim_0 | dim_1 | ... | dim_c-1
0 | pd.Series | pd.Series | pd.Series | pd.Series
1 | pd.Series | pd.Series | pd.Series | pd.Series
... | ... | ... | ... | ...
n | pd.Series | pd.Series | pd.Series | pd.Series
And if the data is a classification problem, a separate (index-aligned) array will be returned with the class labels:
index | class_val
0 | int
1 | int
... | ...
n | int
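As a purely illustrative sketch (this is not how sktime's loaders construct their output internally), the nested structure for the small 3-case, 2-dimension example from the previous section can be assembled by hand with pandas, which makes the Series-inside-DataFrame-cells layout concrete:
```
import numpy as np
import pandas as pd

# Each cell holds an entire pd.Series: one series per case (row) and dimension (column)
X = pd.DataFrame({
    "dim_0": [pd.Series([2, 3, 2, 4]),
              pd.Series([13, 12, 32, 12]),
              pd.Series([4, 4, 5, 4])],
    "dim_1": [pd.Series([4, 3, 2, 2]),
              pd.Series([22, 23, 12, 32]),
              pd.Series([3, 2, 3, 2])],
})

# Class values are kept in a separate, index-aligned array (as the loaders return them)
y = np.array([1, 0, 1])

print(X.shape)            # (3, 2): 3 cases, 2 dimensions
print(X.loc[0, "dim_0"])  # the full series for case 0, dimension 0
```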
## Loading from .ts file to pandas DataFrame
A dataset can be loaded from a .ts file using the following method in sktime.utils.load_data.py:
load_from_tsfile_to_dataframe(full_file_path_and_name, replace_missing_vals_with='NaN')
This can be demonstrated using the Arrow Head problem that is included in sktime under sktime/datasets/data
```
from sktime.utils.load_data import load_from_tsfile_to_dataframe
import os
import sktime
DATA_PATH = os.path.join(os.path.dirname(sktime.__file__), "datasets/data")
train_x, train_y = load_from_tsfile_to_dataframe(os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TRAIN.ts"))
test_x, test_y = load_from_tsfile_to_dataframe(os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TEST.ts"))
```
Train and test partitions of the ArrowHead problem have been loaded into dataframes with associated arrays for class values. As an example, below are the first 5 entries of train_x and train_y:
```
train_x.head()
train_y[0:5]
```
## Loading from Weka ARFF files
It is also possible to load data from Weka's attribute-relation file format (ARFF) files. This is the data format used by researchers at the University of East Anglia (available from www.timeseriesclassification.com ). The `load_from_arff_to_dataframe` method in `sktime.utils.load_data` supports reading both univariate and multivariate problems. Examples are shown below using the ArrowHead problem again (this time loading from ARFF) and also the multivariate BasicMotions problem.
### Loading the univariate ArrowHead problem from ARFF
```
from sktime.utils.load_data import load_from_arff_to_dataframe
X, y = load_from_arff_to_dataframe(os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TRAIN.arff"))
X.head()
```
### Loading the multivariate BasicMotions problem from ARFF
```
X, y = load_from_arff_to_dataframe(os.path.join(DATA_PATH, "BasicMotions/BasicMotions_TRAIN.arff"))
X.head()
```
## Loading from UCR .tsv Format Files
A further option is to load data into sktime from tab-separated value (.tsv) files, as used by researchers at the University of California, Riverside (available at https://www.cs.ucr.edu/~eamonn/time_series_data_2018 ). The `load_from_ucr_tsv_to_dataframe` method in `sktime.utils.load_data` supports reading univariate problems. An example with ArrowHead is given below to demonstrate equivalence with loading from ARFF and .ts file formats.
### Loading the univariate ArrowHead problem from .tsv
```
from sktime.utils.load_data import load_from_ucr_tsv_to_dataframe
X, y = load_from_ucr_tsv_to_dataframe(os.path.join(DATA_PATH, "ArrowHead/ArrowHead_TRAIN.tsv"))
X.head()
```
## Using long-format data with sktime
It is also possible to use data from sources other than .ts and .arff files by manually shaping the data into the format described above. For convenience, a helper function is also provided to convert long-format data into sktime-formatted data in the `from_long_to_nested` method in `sktime.utils.load_data` (with assumptions made on how the data is initially formatted).
The method converts rows from a long-table schema data frame assuming each row contains information for:
`case_id, dimension_id, reading_id, value`
where `case_id` is an id to identify a specific case in the data, `dimension_id` is an integer between 0 and d-1 for d dimensions in the data, `reading_id` is the index of this observation for the associated `case_id` and `dimension_id`, and `value` is the actual value of the observation. E.g.:
index | case_id | dim_id | reading_id | value
------------------------------------------------
0 | int | int | int | double
1 | int | int | int | double
2 | int | int | int | double
3 | int | int | int | double
To demonstrate this functionality the method below creates a dataset with a given number of cases, dimensions and observations:
```
import numpy as np
import pandas as pd
def generate_example_long_table(num_cases=50, series_len=20, num_dims=2):
rows_per_case = series_len*num_dims
total_rows = num_cases*series_len*num_dims
case_ids = np.empty(total_rows, dtype=np.int)
idxs = np.empty(total_rows, dtype=np.int)
dims = np.empty(total_rows, dtype=np.int)
vals = np.random.rand(total_rows)
for i in range(total_rows):
case_ids[i] = int(i/rows_per_case)
rem = i%rows_per_case
dims[i] = int(rem/series_len)
idxs[i] = rem%series_len
df = pd.DataFrame()
df['case_id'] = pd.Series(case_ids)
df['dim_id'] = pd.Series(dims)
df['reading_id'] = pd.Series(idxs)
df['value'] = pd.Series(vals)
return df
```
The following example generates a long-format table with 50 cases, each with 4 dimensions of length 20:
```
X = generate_example_long_table(num_cases=50, series_len=20, num_dims=4)
X.head()
X.tail()
```
As shown below, applying the `from_long_to_nested` method returns a sktime-formatted dataset with individual dimensions represented by columns of the output dataframe:
```
from sktime.utils.load_data import from_long_to_nested
X_nested = from_long_to_nested(X)
X_nested.head()
X_nested.iloc[0][0].head()
```
<img src='../../img/ods_stickers.jpg'>
# <center> Individual data analysis project
## <center>Predicting building permit approvals in San Francisco
<div style="text-align: right"> Author: Timur Fatykhov @FatykhovTimur </div>
<img src='../../img/title_img.jpg'>
### 1. Description of the dataset and features
This dataset contains information about all types of building permits from January 1, 2013 to February 25, 2018 (almost 200k records). A permit is not only needed to construct a new building: approval is also required to change a facade, the number of storeys, or the number of building units, as well as for plumbing, electrical work, or layout changes. If you want to learn more, [here is a link](https://www.thespruce.com/what-is-a-building-permit-1398344) with a more detailed description of building permits. The above means that some features in some rows will, as expected, have the value **NaN**: if we want to construct a new building, its existing number of storeys is not merely 0, it simply does not exist, and neither do the materials the building is made of (since the building does not exist yet, thanks Captain Obvious).
<br><br>
The data is real and is updated every Saturday, thanks to the San Francisco open data portal.
<br>
**Why does all this matter?** According to [some reports](https://www.trulia.com/blog/trends/elasticity-2016/), the mismatch between supply and demand in the housing market is linked to delays in approving construction projects. A simple guess: the population has grown, so a few old low-rise houses should be demolished and new ones built. By the time we get a demolition permit, then a separate construction permit, and then another one for sewerage and plumbing, the population will have grown even more. It would be great to have a system that decides on its own whether to grant a permit or not.
<br>
<br>
Each record can be viewed as an application to the city department. It contains the filing date, the date of the construction work (if it was eventually carried out), the address, and information about the old building and the new one (for example, the old one is wooden with 3 storeys, and we want to build a 5-storey brick building). But, as they say, seeing is better than reading about it, so let's take a closer look...
```
import pandas as pd
import warnings
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
warnings.filterwarnings('ignore')
```
[**Link**](https://www.kaggle.com/aparnashastry/building-permit-applications-data/data) where the data can be found
```
df = pd.read_csv('./data/Building_Permits.csv', sep=',')
df.info()
```
There are a lot of features, and a lot of values for each. Let's go through them step by step, no panic. First, let's take a closer look at the most interesting one of all:
```
# current status of the application
# this will be our target variable
df['Current Status'].value_counts()
```
- complete - completed (the house is built, the facade is painted, the fire alarm is installed, etc.)
- issued - permit issued
- filed - application filed (under review)
- withdrawn - application withdrawn (taken back by the applicant)
- canceled - cancelled by the department
- expired - the permit term has expired
- approved - application approved
- reinstated - reinstated
- suspend - suspended
- revoked - revoked
- plancheck - plan check (e.g. of a plumbing plan)
- disapproved - application disapproved
- incomplete - project not completed
- appeal - appeal
Digging deep into the definitions of these terms would take a while (and, let's be honest, requires knowing how such departments actually work). But it is clear that they fall into two main groups: the application is approved or it is not.
Now it is time to find out which features can be used to predict the fate of future applications (the most interesting ones to study, i.e. those that are likely to have the **strongest influence on the target variable**, are shown in **bold**):
- Permit Number - application number
- Permit Type - type of application (as a number)
- **Permit Type Definition - explanation of the previous item (the mapping between descriptions and numbers, examined in more detail below)**
- Permit Creation Date - the date on which the verdict was issued
- Block - block (part of the address)
- Lot - another part of the address
- **Street Number - street number**
- Street Number Suffix - street number suffix (not all records have one)
- **Street Name - street name**
- Street Suffix - street name suffix
- Unit - building unit (1, 2534, 1432)
- Unit Suffix - unit suffix (A, B, 4C)
- **Description - reasons for filing the application and details (roof repair, wall demolition, etc.)**
- Current Status - current status of the application (discussed in detail above)
- Current Status Date - the day the application acquired its current status
- **Filed Date - the day the application was filed**
- **Issued Date - the day the application was issued (the day it was reviewed)**
- Completed Date - the day the application was completed (walls painted, wiring installed, in short, the work is done)
- **First Construction Document Date - the date the construction is scheduled for**
- **Structural Notification - compliance with certain legal rules (value Y - yes, or NaN)**
- **Number of Existing Stories - number of storeys in the existing building**
- **Number of Proposed Stories - number of storeys proposed in the application**
- **Voluntary Soft-Story Retrofit - number of storeys meeting seismic requirements**
- **Fire Only Permit - provision of fire protection (value Y - yes, or NaN)**
- Permit Expiration Date - expiration date of the work permit
- **Estimated Cost - initial estimate of the project cost**
- **Revised Cost - revised cost estimate**
- **Existing Use - purpose (use) of the building (hotel, restaurant, residential building, etc.)**
- **Existing Units - number of units in the property (a single house or, for example, a cooperative of 30 houses)**
- **Proposed Use - proposed use of the building**
- **Proposed Units - proposed number of units**
- **Plansets - number of plan sets illustrating the main idea of the project**
- **TIDF Compliance - compliance with another legal requirement (value Y - yes, or NaN)**
- Existing Construction Type - construction type at the time of filing, as a number
- **Existing Construction Type Description - description of the previous item (e.g. brick or wood)**
- Proposed Construction Type - proposed construction type
- **Proposed Construction Type Description - description of the previous item**
- **Site Permit - permit for the construction site**
- Supervisor District - district the property belongs to (a value from 1 to 11)
- **Neighborhoods - Analysis Boundaries - neighborhood the property belongs to (e.g. Lincoln Park, South Beach, Russian Hill...)**
- Zipcode - zip code
- Location - coordinates (latitude, longitude)
- Record ID - record ID in the department's database
___
Finally, let's look at the types of requests (permits) the department receives:
```
df['Permit Type Definition'].value_counts()
```
- otc alterations permit - *over-the-counter*, i.e. a private request handled over the counter (uncle Antony wants to install electricity)
- additions alterations or repairs - additions, alterations or repairs
- sign - erect - erecting a structure
- new construction wood frame - new construction with a wooden frame (honestly, what exactly this means is a bit of a mystery)
- demolitions - demolition
- wall or painted sign - changing the appearance of walls (advertising, facade painting)
- new construction - new construction
- grade or quarry or fill or excavate - other (grading, quarrying, filling or excavation)
### 2. Initial data analysis
Let's return to the target variable *Current Status* and thin the data out a bit. Applications that are still being processed at the time of analysis will not help a model that has to decide whether or not to allow construction. Likewise, if the applicant withdrew the application, we have no information about the department's decision. Such records can therefore be removed (marked in red below).
- complete - completed (the house is built, the facade is painted, the fire alarm is installed, etc.)
- issued - permit issued
- <font color='red'>filed - application filed </font> <font color='blue'> *(still under review)*</font>
- <font color='red'>withdrawn - application withdrawn </font> <font color='blue'> *(taken back by the applicant)*</font>
- canceled - cancelled by the department
- <font color='red'>expired - the permit term has expired </font> <font color='blue'> *(the department did not make a decision in time)*</font>
- approved - application approved
- reinstated - reinstated
- suspend - suspended
- revoked - revoked
- <font color='red'>plancheck - plan check (e.g. of a plumbing plan) </font> <font color='blue'> *(no decision has been made yet)*</font>
- <font color='red'>disapproved - application disapproved</font> <font color='blue'> *(the application was filled out incorrectly)*</font>
- incomplete - project not completed <font color='green'> (but it was approved)</font>
- <font color='red'>appeal - appeal </font> <font color='blue'> *(not really relevant to the task)*</font>
```
df = df[(df['Current Status'] != 'filed') &
(df['Current Status'] != 'withdrawn') &
(df['Current Status'] != 'expired') &
(df['Current Status'] != 'plancheck') &
(df['Current Status'] != 'disapproved') &
(df['Current Status'] != 'appeal') ]
```
Let's look at the percentage breakdown of the classes:
```
df['Current Status'].value_counts()/df.shape[0]*100
```
The class distribution is clearly very imbalanced.
Since our goal is to build a model that decides whether to issue a permit for construction work, we **map the set of classes listed above onto the set {0, 1}**, where 0 means the request is rejected and 1 means the project is approved.
<br>
We define the following classes as refusal to issue a permit: <font color='red'>cancelled, suspend, revoked.</font>
<br>
We define the following classes as approval of the project: <font color='green'>complete, issued, approved, reinstated, incomplete.</font>
```
df['Current Status'] = df['Current Status'].map({'cancelled': 0, 'suspend': 0, 'revoked': 0,
'complete': 1, 'issued': 1, 'approved': 1,
'reinstated': 1, 'incomplete': 1})
df['Current Status'] = df['Current Status'].astype('int64')
print('Class distribution in percent:')
df['Current Status'].value_counts() /df.shape[0] * 100
```
### 3. Initial visual data analysis
<font size='5px' color='orange'>Oh well... Time to sleep, I have an important class in the morning, and I'm in Novosibirsk (+4 hours from Moscow). But I can still earn a few points for the first two sections, right? :D
<br><br>
If this ever reaches any readers, I apologize for the stolen time. Have a nice day :)</font>
# Facies classification using Machine Learning
#### Brendon Hall, [Enthought](https://www.enthought.com/)
This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on [Neural Networks and Fuzzy Systems](http://www.people.ku.edu/~gbohling/EECS833/). This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see [Bohling and Dubois (2003)](http://www.kgs.ku.edu/PRS/publication/2003/ofr2003-50.pdf) and [Dubois et al. (2007)](http://dx.doi.org/10.1016/j.cageo.2006.08.011).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a support vector machine to classify facies types. Support vector machines (or SVMs) are a type of supervised learning model that can be trained on data to perform classification and regression tasks. The SVM algorithm uses the training data to fit an optimal hyperplane between the different classes (or facies, in our case). We will use the SVM implementation in [scikit-learn](http://scikit-learn.org/stable/modules/svm.html).
First we will [explore the dataset](#Exploring-the-dataset). We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data.
Next we will [condition the data set](#Conditioning-the-data-set). We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets.
We will then be ready to [build the SVM classifier](#Building-the-SVM-classifier). We will demonstrate how to use the cross validation set to do [model parameter selection](#Model-parameter-selection).
Finally, once we have a built and tuned the classifier, we can [apply the trained model](#Applying-the-classification-model-to-new-data) to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data.
## Exploring the dataset
First, we will examine the data set we will use to train the classifier. The training data is contained in the file `facies_vectors.csv`. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data.
```
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
```
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include [gamma ray](http://petrowiki.org/Gamma_ray_logs) (GR), [resistivity logging](http://petrowiki.org/Resistivity_and_spontaneous_%28SP%29_logging) (ILD_log10),
[photoelectric effect](http://www.glossary.oilfield.slb.com/en/Terms/p/photoelectric_effect.aspx) (PE), [neutron-density porosity difference and average neutron-density porosity](http://petrowiki.org/Neutron_porosity_logs) (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
```
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
```
This is a quick view of the statistical distribution of the input variables. Looking at the `count` values, there are 3232 feature vectors in the training set.
Remove a single well to use as a blind test later.
```
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
```
These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone.
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the `facies_vectors` dataframe.
```
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
```
Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on those described in Alessandro Amato del Monte's [excellent tutorial](https://github.com/seg/tutorials/tree/master/1504_Seismic_petrophysics_1).
```
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
```
Placing the log plotting code in a function will make it easy to plot the logs from multiple wells, and it can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters.
We then show the log plot for well `SHRIMPLIN`.
```
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
```
In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class.
```
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
```
This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies.
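One common mitigation for this kind of imbalance (not used in this tutorial) is to weight each class inversely to its frequency when training the classifier; scikit-learn's `SVC` supports this directly:
```
from sklearn import svm

# Hypothetical alternative: penalize mistakes on rare facies (such as dolomite) more heavily.
# This classifier is only a sketch and is not fit or evaluated in this tutorial.
clf_weighted = svm.SVC(class_weight='balanced')
```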
Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and scatter matrix can help to quickly visualize the variation between the all the variables in the dataset. We can employ the very useful [Seaborn library](https://stanford.edu/~mwaskom/software/seaborn/) to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point is colored according to its facies. The same colormap is used to represent the 9 facies.
```
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
```
## Conditioning the data set
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
```
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
```
Scikit includes a [preprocessing](http://scikit-learn.org/stable/modules/preprocessing.html) module that can 'standardize' the data (giving each variable zero mean and unit variance, also called *whitening*). Many machine learning algorithms assume features will be standard normally distributed data (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The `StandardScaler` class can be fit to the training set, and later used to standardize any subsequent input data.
```
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
```
Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set.
```
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
```
## Training the SVM classifier
Now we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a [support vector machine](https://en.wikipedia.org/wiki/Support_vector_machine). The SVM is a map of the feature vectors as points in a multi dimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible.
The SVM implementation in [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) takes a number of important parameters. First we create a classifier using the default settings.
```
from sklearn import svm
clf = svm.SVC()
```
Now we can train the classifier using the training set we created above.
```
clf.fit(X_train,y_train)
```
Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
```
predicted_labels = clf.predict(X_test)
```
We need some metrics to evaluate how good our classifier is doing. A [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) is a table that can be used to describe the performance of a classification model. [Scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) allows us to easily create a confusion matrix by supplying the actual and predicted facies labels.
The confusion matrix is simply a 2D array. The entries of confusion matrix `C[i][j]` are equal to the number of observations predicted to have facies `j`, but are known to have facies `i`.
To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file `classification_utilities.py` in this repo for the `display_cm()` function.
```
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
```
The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label `SS`, 23 were correctly identified as `SS`, 21 were classified as `CSiS` and 2 were classified as `FSiS`.
The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications.
```
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
```
As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label `i`, `adjacent_facies[i]` is an array of the adjacent facies labels.
```
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
```
## Model parameter selection
The classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.
We will consider two parameters. The parameter `C` is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if `C` is too large it may 'overfit' the data and fail to generalize when classifying new data. If `C` is too small then the model will not be good at fitting outliers and will have a large error on the training set.
The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function `rbf` kernel (the default). The `gamma` parameter describes the size of the radial basis functions, which controls how far apart two vectors in the feature space can be while still being considered close.
We will train a series of classifiers with different values for `C` and `gamma`. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.
This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
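As an aside, scikit-learn also provides `GridSearchCV`, which automates this kind of parameter sweep using k-fold cross validation on the training set. A minimal sketch (not the approach used below) might look like this; note that in the older scikit-learn used by this notebook the class lives in `sklearn.grid_search` rather than `sklearn.model_selection`:
```
from sklearn import svm
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older versions

# search a small grid of C and gamma values with 5-fold cross validation
param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(svm.SVC(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```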
```
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
```
The best accuracy on the cross validation error curve was achieved for `gamma = 1`, and `C = 10`. We can now create and train an optimized classifier based on these parameters:
```
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
```
[Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall) are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class.
Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the `display_cm()` function:
```
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
```
To interpret these results, consider facies `SS`. In our test set, if a sample was labeled `SS` the probability the sample was correct is 0.8 (precision). If we know a sample has facies `SS`, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The [F1 score](https://en.wikipedia.org/wiki/Precision_and_recall#F-measure) combines both to give a single measure of relevancy of the classifier results.
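If you prefer scikit-learn's built-in reporting, the same per-facies precision, recall and F1 values can be obtained directly from the predictions (shown here only as a cross-check of the table above):
```
from sklearn.metrics import classification_report

# labels fixes the class order so every facies appears in the report
print(classification_report(y_test, clf.predict(X_test),
                            labels=list(range(1, 10)),
                            target_names=facies_labels))
```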
These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies `MS` or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies `BS` or bafflestone has the best `F1` score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies.
We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
```
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
```
Considering adjacent facies, the `F1` scores for all facies types are above 0.9, except when classifying `SiSh` or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone.
These results are comparable to those reported in Dubois et al. (2007).
## Applying the classification model to the blind data
We held a well back from the training, and stored it in a dataframe called `blind`:
```
blind
```
The label vector is just the `Facies` column:
```
y_blind = blind['Facies'].values
```
We can form the feature matrix by dropping some of the columns and making a new dataframe:
```
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
```
Now we can transform this with the scaler we made before:
```
X_blind = scaler.transform(well_features)
```
Now it's a simple matter of making a prediction and storing it back in the dataframe:
```
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
```
Let's see how we did with the confusion matrix:
```
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
```
We managed 0.71 using the test data, but it was from the same wells as the training data. This more reasonable test does not perform as well...
```
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
```
...but does remarkably well on the adjacent facies predictions.
```
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
```
## Applying the classification model to new data
Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input.
This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called `test_data`.
```
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
```
The data needs to be scaled using the same constants we used for the training data.
```
X_unknown = scaler.transform(well_features)
```
Finally we predict facies labels for the unknown data, and store the results in a `Facies` column of the `test_data` dataframe.
```
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
```
We can use the well log plot to view the classification results along with the well logs.
```
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
```
Finally we can write out a csv file with the well data along with the facies classification results.
```
well_data.to_csv('well_data_with_facies.csv')
```
## References
Amato del Monte, A., 2015. Seismic Petrophysics: Part 1, *The Leading Edge*, 34 (4). [doi:10.1190/tle34040440.1](http://dx.doi.org/10.1190/tle34040440.1)
Bohling, G. C., and M. K. Dubois, 2003. An Integrated Application of Neural Network and Markov Chain Techniques to Prediction of Lithofacies from Well Logs, *KGS Open-File Report* 2003-50, 6 pp. [pdf](http://www.kgs.ku.edu/PRS/publication/2003/ofr2003-50.pdf)
Dubois, M. K., G. C. Bohling, and S. Chakrabarti, 2007, Comparison of four approaches to a rock facies classification problem, *Computers & Geosciences*, 33 (5), 599-617 pp. [doi:10.1016/j.cageo.2006.08.011](http://dx.doi.org/10.1016/j.cageo.2006.08.011)
|
github_jupyter
|
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
training_data.describe()
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
#count the number of unique entries for each facies, sort them by
#facies number (instead of by number of entries)
facies_counts = training_data['Facies'].value_counts().sort_index()
#use facies labels to index each count
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,
title='Distribution of Training Data by Facies')
facies_counts
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
correct_facies_labels = training_data['Facies'].values
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
feature_vectors
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
from sklearn import svm
clf = svm.SVC()
clf.fit(X_train,y_train)
predicted_labels = clf.predict(X_test)
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
blind
y_blind = blind['Facies'].values
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
X_blind = scaler.transform(well_features)
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
X_unknown = scaler.transform(well_features)
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
# Descriptive Statistics
- We'll be focusing primarily on descriptive statistics in order to describe patterns, trends, distributions, and behaviors across our data.
# Measures of Central Tendency
## Mean, Median and Mode
The "Mean" is computed by adding all of the numbers in the data
together and dividing by the number elements contained in the data set.
Example :
Data Set = 2, 5, 9, 3, 5, 4, 7
Number of Elements in Data Set = 7
Mean = ( 2 + 5 + 9 + 3 + 5 + 4 + 7 ) / 7 = 5
Mathematical Notation for mean:
<img src="mean_formula.png" width="100" height="100">
n: the number of elements in the data set
x_i: the i-th element of the data set
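As a quick sanity check of the formula (a minimal sketch, assuming `numpy` is available), the same mean can be computed directly:
```
import numpy as np

example = np.array([2, 5, 9, 3, 5, 4, 7])
print(example.sum() / len(example))  # sum of the elements divided by n -> 5.0
print(np.mean(example))              # numpy's built-in mean gives the same result
```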
### Median :
The "Median" of a data set is dependant on whether the number of
elements in the data set is odd or even. First reorder the data set
from the smallest to the largest then if the number of elements
are odd, then the Median is the element in the middle of the data set.
If the number of elements are even, then the Median is the average
of the two middle terms.
#### Examples : Odd Number of Elements
Data Set = 2, 5, 9, 3, 5, 4, 7
Reordered = 2, 3, 4, 5, 5, 7, 9
<img src="median_1.png" width="100" height="100">
Median = 5
#### Examples : Even Number of Elements
Data Set = 2, 5, 9, 3, 5, 4
Reordered = 2, 3, 4, 5, 5, 9
<img src="median_2.png" width="100" height="100">
Median = ( 4 + 5 ) / 2 = 4.5
Mathematical Notation for median:
<img src="median_formula.png" width="300" height="300">
### Mode :
The "Mode" for a data set is the element that occurs the most often.
It is not uncommon for a data set to have more than one mode.
This happens when two or more elements accur with equal frequency
in the data set.
Example:
Data Set = 2, 5, 9, 3, 5, 4, 7
Mode = 5
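A minimal sketch that counts occurrences with numpy and keeps every value tied for the highest count (so it also handles data sets with more than one mode):
```
import numpy as np

example = np.array([2, 5, 9, 3, 5, 4, 7])
values, counts = np.unique(example, return_counts=True)
print(values[counts == counts.max()])  # all values with the highest frequency -> [5]
```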
### Activity: _Write a function to compute the mean from an arbitrary dataset._
```
import numpy as np  # needed for the array used below

data = np.array([1, 3, 5, 2, 3, 7, 8, 4, 10, 0, 6, 7, 3, 0, 3, 0, 5, 7, 10, 1, 4, 9, 3])
# TODO: Complete this function by having the function return the average value of our dataset.
def compute_mean(dataset):
""" Main function that calculates the average value across our data. """
return
compute_mean(data)
```
### Activity: _Write a function to compute the median from an arbitrary dataset._
```
data = np.array([1, 3, 5, 2, 3, 7, 8, 4, 10, 0, 6, 7, 3, 0, 3, 0, 5, 7, 10, 1, 4, 9, 3])
# TODO: Complete this function by having the function return the exact true median value of our dataset.
# HINT: Consider using DataFrame slicing to help with identifying the correct median value(s).
def compute_median(dataset):
""" Main function that determines the median value across our data. """
count = len(dataset)
if count < 1:
# TODO: Complete this if-statement
return
if count % 2 == 1:
# TODO: Complete this if-statement
return
else:
# TODO: Complete this if-else statement
return
compute_median(data)
```
### Activity: _Write a function to compute the mode from an arbitrary dataset._
```
# NOTE: Trickier than it looks!
data = np.array([1, 3, 5, 2, 3, 7, 8, 4, 10, 0, 6, 7, 3, 0, 3, 0, 5, 7, 10, 1, 4, 9, 3])
# TODO: Complete this function by having the function return the relative mode across our dataset.
# HINT: Remember histograms and tokenization from CS 1.2? How might they help you here?
def compute_mode(dataset):
""" Main function that determines the mode value across our data. """
return
compute_mode(data)
```
## Measures of Spread
- Range
- Variance
### Range :
The "Range" for a data set is the difference between the largest value and
smallest value contained in the data set. First reorder the data set from
smallest to largest then subtract the first element from the last element.
Example:
Data Set = 2, 5, 9, 3, 5, 4, 7
Reordered = 2, 3, 4, 5, 5, 7, 9
Range = ( 9 - 2 ) = 7
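A one-line sketch of the same calculation with numpy:
```
import numpy as np

example = np.array([2, 5, 9, 3, 5, 4, 7])
print(example.max() - example.min())  # largest value minus smallest value -> 7
```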
### Variance :
Variance measures how spread out the data is: it is the average of the squared deviations of the elements from the mean.
The standard deviation is the square root of the variance.
Example:
<img src="standard_deviation.png" width="500" height="500">
## What does variance (or standard deviation) mean?
- We measured the number of rainy days during fall in three different cities over the last 5 years:
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.DataFrame({'Rainy':[29,28,32,35,36,12,18,30,45,55, 32,32,32,32,32], 'City':['City_A']*5 + ['City_B']*5 + ['City_C']*5})
df
```
## Activity: Obtain the variance of City_A, City_B and City_C
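One possible approach (a sketch, not the only solution) is to group the DataFrame by city and ask pandas for the variance of each group; note that pandas uses the sample variance (ddof=1) by default:
```
import pandas as pd

df = pd.DataFrame({'Rainy': [29, 28, 32, 35, 36, 12, 18, 30, 45, 55, 32, 32, 32, 32, 32],
                   'City': ['City_A'] * 5 + ['City_B'] * 5 + ['City_C'] * 5})
print(df.groupby('City')['Rainy'].var())  # City_C has zero variance: every year had 32 rainy days
```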
# Ex8 - View and Projection Transformations
In this activity, you will practice the concepts of view and projection transformations. To avoid execution errors, use only one code cell for each part of this activity. Depth testing must be enabled in all exercises.
### Part 1 - View transformation
Continuing the construction of the solar system from Activity 7 ([EA979A_Ex07_Modelos_e_transformacoes-Gabarito](EA979A_Ex07_Modelos_e_transformacoes-Gabarito.ipynb)), add the rendering of Jupiter to the program. Its data is provided in the code below. You will notice that the newly added planet does not fit on the screen, so some transformations are needed to visualize it. To do this, create a view matrix and handle the keyboard arrow-key events so that the parameters of this matrix (the look-at point and the camera position) move simultaneously in the direction of the pressed key: the left and right arrows move these parameters along the x axis, and the 'up' and 'down' arrows move them along the y axis. In addition, use the + and - keys to change a global transformation that, respectively, increases and decreases the size of all rendered objects. The image below shows the expected appearance after adding Jupiter, and a minimal sketch of one way to build such a view matrix follows the planet data. Comment on the result obtained when changing the view parameters. The notebook ([29_Transformacao_visao](29_Transformacao_visao.ipynb)) illustrates some ways to create and use the view matrix.
<img src='cg/images/ex8_solar_system_3.png' style="width:300px">
```
PlanetarySheet = {
'sun': SunInfo(np.array([1.00, 1.00, 0.00, 1.0]), 1391900),
'earth': PlanetInfo(np.array([0.61, 0.79, 0.37, 1.0]), 12742, 365.2, 1.57, 1.0027, 1.0025, 0.0167),
'moon': PlanetInfo(np.array([0.50, 0.50, 0.50, 1.0]), 3475, 27.3, 5.1, 0.0025718 * moon_scale_factor, 0.0021479 * moon_scale_factor, 0.00014537),
'jupiter': PlanetInfo(np.array([0.83, 0.67, 0.53, 1.0]), 142984, 4331, 0.32, 5.2073, 5.2010, 0.2520)
}
```
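A minimal sketch of how such a look-at view matrix could be assembled with plain numpy is shown below. The function name `look_at` and the example eye/target/up values are illustrative assumptions made for this sketch, not part of the official solution; the course notebook ([29_Transformacao_visao](29_Transformacao_visao.ipynb)) may organize this differently.
```
import numpy as np

def look_at(eye, target, up):
    """Build a 4x4 view matrix from the camera position, the look-at point and an up vector."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                 # forward direction (camera -> target)
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                 # right direction
    u = np.cross(s, f)                     # corrected up direction
    view = np.identity(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye      # move the world opposite to the camera
    return view

# Example: camera at z = +10 looking at the origin; the arrow keys would shift
# both the eye and the target by the same amount along x or y.
print(look_at(eye=[0, 0, 10], target=[0, 0, 0], up=[0, 1, 0]))
```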
### Part 2 - Projection transformations
Add to the rendering from Part 1 two squares of different colors ([27_Quadrado_com_transformacoes](27_Quadrado_com_transformacoes.ipynb)). The first is 25x25 and must be translated to position (0, 0, -15); the second is 30x30 and must be translated to position (0, 0, -20). Render this scene in two ways: once using an orthographic projection matrix and once using a perspective projection matrix. Adjust only the parameters of the view and projection matrices of these two scenes so that each one renders an image similar to the one below (with the squares centered on the screen and their edges far from the window borders). The look-at point is fixed at position (0, 0, -1). Unlike Part 1, the global transformation matrix must be the identity matrix and there is no keyboard event handling. The two scenes can be rendered by a single program using multiple viewports or by two separate programs. Comment on how you arrived at the matrix parameter values that produce the requested scenes; a sketch of both kinds of projection matrix follows the image below. The notebooks ([30_Transformacao_projecao_ortogonal](30_Transformacao_projecao_ortogonal.ipynb)) and ([31_Transformacao_projecao_perspectiva](31_Transformacao_projecao_perspectiva.ipynb)) illustrate some ways to create and use the projection matrices.
<img src='cg/images/ex8_solar_system_4.png' style="width:400px">
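Below is a small, self-contained sketch of the two kinds of projection matrix built with numpy, following the common OpenGL-style conventions of `glOrtho` and `gluPerspective`. The function names and the example parameter values are assumptions made for illustration; the course notebooks linked above remain the reference for the exercise.
```
import numpy as np

def orthographic(left, right, bottom, top, near, far):
    """Orthographic projection: size on screen does not depend on depth."""
    m = np.identity(4)
    m[0, 0] = 2.0 / (right - left)
    m[1, 1] = 2.0 / (top - bottom)
    m[2, 2] = -2.0 / (far - near)
    m[:3, 3] = [-(right + left) / (right - left),
                -(top + bottom) / (top - bottom),
                -(far + near) / (far - near)]
    return m

def perspective(fovy_deg, aspect, near, far):
    """Perspective projection: objects farther from the camera appear smaller."""
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# The squares sit at z = -15 and z = -20, so the near/far planes must bracket both depths.
print(orthographic(-20, 20, -20, 20, 0.1, 30.0))
print(perspective(60.0, 1.0, 0.1, 30.0))
```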
### Part 3 - View and projection transformations
Using the code from Part 2, handle the keyboard arrow-key events to change only the camera position in the view matrix (the look-at point remains fixed at position (0, 0, -1)). Move the camera-position parameter of this matrix in the direction of the pressed key: the left and right arrows move it along the x axis, and the 'up' and 'down' arrows move it along the y axis. In addition, use the + and - keys to, respectively, increment and decrement the z component of the camera position. The images below show the effect obtained when the camera position is translated to the left. Comment on the differences between the images rendered with the perspective projection and with the orthographic projection as the camera position changes. The notebooks ([29_Transformacao_visao](29_Transformacao_visao.ipynb)), ([30_Transformacao_projecao_ortogonal](30_Transformacao_projecao_ortogonal.ipynb)) and ([31_Transformacao_projecao_perspectiva](31_Transformacao_projecao_perspectiva.ipynb)) illustrate some ways to create and use the view and projection matrices.
<table>
    <tr>
        <td> Perspective projection <img src='cg/images/ex8_solar_system_5.png' style="width:400px"> </td>
        <td> Orthographic projection <img src='cg/images/ex8_solar_system_6.png' style="width:400px"></td>
    </tr>
</table>
### Part 4 - Completing the solar system
Complete the solar system by adding the missing planets; the code below provides the data needed to do so. Also change the size of the background squares: for the closer square (at position (0, 0, -15)), change the size to 130x130, and for the farther square, change it to 150x150. Finally, configure the view and perspective projection matrices to produce the image below. The notebooks ([29_Transformacao_visao](29_Transformacao_visao.ipynb)) and ([31_Transformacao_projecao_perspectiva](31_Transformacao_projecao_perspectiva.ipynb)) illustrate some ways to create and use the view and projection matrices.
<img src='cg/images/ex8_solar_system_7.png' style="width:400px">
```
PlanetarySheet = {
'sun': SunInfo(np.array([1.00, 1.00, 0.00, 1.0]), 1391900),
'mercury': PlanetInfo(np.array([0.96, 0.90, 0.71, 1.0]), 4866, 88.0, 6.34, 0.3870, 0.3788, 0.0796),
'venus': PlanetInfo(np.array([0.95, 0.82, 0.38, 1.0]), 12106, 224.7, 2.19, 0.7219, 0.7219, 0.0049),
'earth': PlanetInfo(np.array([0.61, 0.79, 0.37, 1.0]), 12742, 365.2, 1.57, 1.0027, 1.0025, 0.0167),
'moon': PlanetInfo(np.array([0.50, 0.50, 0.50, 1.0]), 3475, 27.3, 5.1, 0.0025718 * moon_scale_factor, 0.0021479 * moon_scale_factor, 0.00014537),
'mars': PlanetInfo(np.array([0.88, 0.81, 0.61, 1.0]), 6760, 687.0, 1.67, 1.5241, 1.5173, 0.1424),
'jupiter': PlanetInfo(np.array([0.83, 0.67, 0.53, 1.0]), 142984, 4331, 0.32, 5.2073, 5.2010, 0.2520),
'saturn': PlanetInfo(np.array([0.89, 0.87, 0.63, 1.0]), 116438, 10747, 0.93, 9.5590, 9.5231, 0.5181),
'uranus': PlanetInfo(np.array([0.00, 0.87, 0.95, 1.0]), 46940, 30589, 1.02, 19.1848, 19.1645, 0.9055),
'neptune': PlanetInfo(np.array([0.00, 0.51, 0.89, 1.0]), 45432, 59800, 0.72, 30.0806, 30.0788, 0.2587),
'pluto': PlanetInfo(np.array([0.62, 0.63, 0.64, 1.0]), 2274, 90560, 15.55, 39.5, 34.031, 9.8276)
}
```
```
import tensorflow as tf
import os
import numpy as np
import matplotlib.pyplot as plt
tf.__version__
_URL = "http://127.0.0.1:81/pv/PlantVillage.zip"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="PlantVillage.zip",
extract=True)
# Both generators read the same directory; the validation_split below separates train and validation
base_dir = os.path.join(os.path.dirname(zip_file), 'PlantVillage\\train')
base_dir2 = os.path.join(os.path.dirname(zip_file), 'PlantVillage\\train')
IMAGE_SIZE = 256
BATCH_SIZE = 32
# Rescale pixel values to [0, 1] and hold out 20% of the images for validation
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
train_generator = datagen.flow_from_directory(
base_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='training')
val_generator = datagen.flow_from_directory(
base_dir2,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='validation')
for image_batch, label_batch in train_generator:
break
image_batch.shape, label_batch.shape
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('new_labels.txt', 'w') as f:
f.write(labels)
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
# Classification head on top of the frozen base: extra conv layers, dropout, pooling, 47-way softmax
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(47, activation='softmax')
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
print('Number of trainable variables = {}'.format(len(model.trainable_variables)))
epochs = 101
history = model.fit_generator(train_generator,
epochs=epochs,
validation_data=val_generator)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
# Fine-tuning: unfreeze the whole base model, then re-freeze every layer below `fine_tune_at`
base_model.trainable = True
print("Number of layers in the base model: ", len(base_model.layers))
fine_tune_at = 101
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
model.compile(loss='categorical_crossentropy',
optimizer = tf.keras.optimizers.Adam(1e-5),
metrics=['accuracy'])
model.summary()
print('Number of trainable variables = {}'.format(len(model.trainable_variables)))
history_fine = model.fit_generator(train_generator,
epochs=11,
validation_data=val_generator)
# Export the Keras model as a SavedModel and convert it to TensorFlow Lite
saved_model_dir = '.'
tf.saved_model.save(model, saved_model_dir)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open('new_model.tflite', 'wb') as f:
f.write(tflite_model)
acc = history_fine.history['accuracy']
val_acc = history_fine.history['val_accuracy']
loss = history_fine.history['loss']
val_loss = history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
```
# Lab 1: Introduction to Jupyter and Symbolic Math
## 1. Background
This course includes a set of computer laboratories that explore topics related to signals and systems. These will be made available to you as the course progresses, and you are expected to engage with the content and produce and submit a set of results related to some defined tasks. Assistance will be provided if required, but it is really expected that you'll work in your own time and at your own pace: it's important right from the start for you to take responsibility for your own learning. You shouldn't be working in groups, but helping one another out on specific issues is fine.
The computer labs are based on the Python programming language, and they require a kernel that includes all the standard numerical libraries. The labs have been created as worksheets using the *Jupyter Notebook* system, which works in a browser and allows you to interleave text and Python code snippets that can be run interactively. Jupyter can be installed via a package called *Anaconda* - see the Orientation "Labs" lesson on Vula for details.
This first lab covers the introduction to Jupyter as well as specific scientific libraries such as NumPy, SymPy and Matplotlib. These packages are instrumental for data visualisation using Python, and will be used throughout the three labs for this course.
### Basic Jupyter usage
A notebook is basically composed of cells. A cell can contain text ("markdown") or code - in this case, Python. For any cell you can select the content type using the pulldown on the menubar. A cell can be executed by pressing shift-enter while it is in focus.
Below is a cell that creates two "numpy" arrays. The values in `x` (i.e. function `x(t)`) are initialised to be a particular function of `t` (i.e. your time variable). Select it and press shift-enter to run the code in the Python kernel attached to the notebook. Change the number of points in `t` and press shift-enter again to run, and note the change in output values.
```
import numpy as np # Import numpy for vector computations
t = np.linspace(0, 10, 10); # Create a time variable from t = 0 to t = 10 with 10 points in this range
x = 0.1*t**2 - np.cos(t); # Create x variable as a function of t
print(x); # Print values of x at the 10 defined points of t
```
The Python kernel is fully-featured for numeric computation, and includes most of the packages useful for scientific computing. The following code plots the vectors defined above using "matplotlib". The `%matplotlib notebook` line makes any plots appear in the notebook itself rather than in a separate window.
```
import matplotlib.pyplot as plt # Import matplotlib for plotting
%matplotlib notebook
plt.rcParams['figure.dpi'] = 60; # Decrease plot size
plt.plot(t, x, 'ro-') # Plot x vs t using red, circular dots connected by lines
plt.xlabel('t'); plt.ylabel('x(t)'); # Labels for axes
```
When you execute the code above it runs it in the Python kernel for the notebook, which already has the variables `t` and `x` defined. This can cause problems: if you jump around the notebook running cells in some arbitrary order then the kernel will probably end up in a weird state. If this happens then you can use the menu item "Cell" -> "Run All" to re-execute all cells in the notebook in order. If things really go wrong then restart the kernel.
Executing a text or markdown cell has a different effect: the content is converted and rendered as rich text. The markdown language is documented all over the internet, and supports both HTML and LaTeX equations. Do a web search if you want details or introductions to these topics.
### Gaining familiarity
One of the nice things about Python is that there's an active community of technical people that provide useful information and resources. If you don't know how to do something, then a simple web search can quickly help. There are also some "Cheat Sheets" available on Vula if you're feeling rusty with your Python.
There's a basic numerical Python tutorial at `http://cs231n.github.io/python-numpy-tutorial`, and a Python notebook version at `https://github.com/kuleshov/cs228-material/blob/master/tutorials/python/cs228-python-tutorial.ipynb`. A copy of this notebook that has been updated for Python 3 is included in the "examples" directory of your "notebooks" folder. Load this notebook and work through the cells. Make changes to the contents, and press "shift-enter" to run and see the corresponding outputs. Pay particular attention to how to use numpy arrays, and how to plot using matplotlib.
You can also access function help in a Jupyter notebook by placing a `?` before a function name, e.g. `?plt.plot`.
## 2. Symbolic math introduction
Now that we've covered the basics, we can move onto something more useful: symbolic mathematics. This is a maturing technology that lets a computer do maths using symbolic manipulation rather than numerical computation. Python has support for symbolic computation via the "sympy" package. Some good examples of sympy in use are at `http://www.cfm.brown.edu/people/dobrush/am33/SymPy/index.html` and `https://github.com/sympy/sympy/wiki/Quick-examples`.
### Basic differentiation
The cell below imports the symbolic math package, and defines two symbolic variables `x` and `y`. A symbolic function $f(x,y) = (x^2-2x+3)/y$ is then defined and printed.
```
import sympy as sp # Import the sympy library under the name 'sp'
x, y = sp.symbols('x y');
f = (x**2 - 2*x + 3)/y;
print(f);
# Pro-tip: you can make your outputs look good with LaTeX:
from IPython.display import display
sp.init_printing() # Initialises pretty printing. Only needs to be run once!
# Behold:
print(f) # Ugly mode
display(f) # Fancy-pants mode
```
Note that `f` here is a symbol representing a function. It would be nice if the notation made it explicit that it's actually a function of $x$ and $y$, namely `f(x,y)`, but that's not how it works. However, we can query the free variables:
```
f.free_symbols
```
We can get sympy to find a symbolic expression for the partial derivative of $f(x,y)$ with respect to $y$ by using:
```
fpy = sp.diff(f, y)
fpy
```
To evaluate this derivative at some particular values $x=\pi$ and $y=2$ we can substitute into the symbolic expression:
```
fpyv = fpy.subs([(x, sp.pi), (y, 2)])
fpyv
```
Notice though that this is still a symbolic expression. It can be evaluated using the "evalf" method, which finally returns a number:
```
fpyv.evalf()
```
### More advanced differentiation
Symbolic expressions can be manipulated. For example, we can define $g(t) = f(x(t), y(t))$, which in this case (given above) means:
$$g(t) = (x(t)^2-2x(t)+3)/y(t),$$
and we can find its derivative with respect to time. I.e. $g$ is a function of time determined by $x$ and $y$, which are also functions of time.
```
t = sp.symbols('t');
xt = sp.Function("x")(t); # x(t)
yt = sp.Function("y")(t) # y(t)
g = f.subs([(x,xt),(y,yt)]); # Define g(t)
gp = sp.diff(g,t); # Differentiate g(t) with respect to time
g # Print g(t)
gp # Print g'(t)
```
### Plotting symbolic functions
The sympy module also has a `plot` method that knows how to plot symbolic functions of a single variable. The function `g` above with $x(t) = \sin(t)$ and $y(t) = \cos(2t)$ is a function of a single time variable `t`, and can be visualised as follows:
```
gs = g.subs([(xt,sp.sin(t)), (yt,sp.cos(2*t))]); # Create function gs(t)
gs
sp.plot(gs, (t,1,2), xlabel="t", ylabel="gs(t)"); # Plot gs vs t using sympy (t defined from t = 1 to t = 2)
```
A roughly equivalent plot could be obtained numerically by creating a lambda function for the expression, evaluating it for a closely-spaced set of values of `t` over the required range, and using standard numerical plotting functions that draw straight lines between the calculated points. If you increase the number of calculated points over the interval then the approximation in the above graph becomes more accurate. You can think of a lambda function as just another way of defining a function, e.g. `f = lambda x: x**2` is the same as `def my_func(x): return x**2`.
```
tv = np.linspace(1, 2, 10); # Axis between tv = 1 and tv = 2 for 10 steps
gs_h = sp.lambdify(t, gs, modules=['numpy']);
gstv = gs_h(tv);
plt.figure(); # New figure
plt.plot(tv, gstv);
```
### Symbolic integration
Integration is also a standard function in sympy, so we can find for example the integral
$$y(t) = \int_{-10}^t x(\lambda) d\lambda$$
for $x(t) = e^{-t/10} \cos(t)$:
```
xt = sp.exp(-t/10)*sp.cos(t); # x(t)
lamb = sp.symbols('lamb'); # Dummy variable Lambda
xl = xt.subs(t,lamb); # x(lamb)
yt = sp.integrate(xl, (lamb, -10, t)); # Indefinite integral that produces a function of t
yt # Display y(t)
```
NOTE: To get a definite integral over the range, say -10 to 0, you'd go `yt = sp.integrate(xl, (lamb, -10, 0))`. Also, don't forget about your initial conditions; the definite integral only gives the change in the variable over the
interval, so you need to add its initial state to this value to get the true final state. We'll usually assume initial rest conditions in this course, but NOT always.
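For example, a short sketch of that idea (the initial value used here is an illustrative assumption, not part of the lab):
```
import sympy as sp

lamb = sp.symbols('lamb')
x_lamb = sp.exp(-lamb/10)*sp.cos(lamb)         # same integrand as above, in the dummy variable
change = sp.integrate(x_lamb, (lamb, -10, 0))  # definite integral: change over [-10, 0]
y_initial = 2                                  # assumed initial state y(-10) = 2
print(sp.N(change), sp.N(y_initial + change))  # final state = initial state + change
```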
Overall, the sympy plot function is quite fragile, and might not always work. Symbolic math packages are amazing, but they're difficult to implement and are sometimes not robust: you'll find various postings on the internet that give instances of very good symbolic math engines giving a wrong result. In short, they are useful but you should be careful when using them.
## Tasks
These tasks involve writing code, or modifying existing code, to meet the objectives described.
1. We often need to plot complex-valued signals, where for each value of $t$ there is a real and an imaginary value, or a magnitude and a phase. We therefore need two sets of axes. Use the `subplot` functionality of `matplotlib.pyplot` to plot the real and imaginary parts of the signal $x(t) = e^{j \omega_0 t}$ in a single figure, but on two separate axes. Use a value of $\omega_0 = 4$ and make the plot range over $t=0$ to $t=10$. (5 marks)<br><br>
2. Signal processing often involves recursion. This course concentrates on continuous-time signals, but equivalent processes can be applied in the digital (discrete) case. Consider the recursive equation $x[n] = -0.90 x[n-1]$. Create a numpy array with 100 elements for each of the values $x[0]$ to $x[99]$, and write code to populate it (assuming the initial condition $x[0] = 10$). Use `stem` to make a plot of $x[n]$ versus $n$ (discrete time) over the range calculated. (5 marks)<br><br>
3. Define the expression $y(t) = v_0 t - \frac{1}{2} g t^2$ for some symbolic values of $v_0$ and $g$ using sympy. You should recognise this as the "altitude" of a particle moving under the influence of gravity, given that the initial velocity at time $t=0$ is $v_0$. Make a plot of the particle height in meters for $v_0 = 20 m/s$ given $g = 9.8 m/s^2$, over the range $t=0$ to $t=10$ s. (5 marks)<br><br>
4. Suppose the acceleration of a particle is given by $a(t) = 0.3 + \cos(t)$ for positive time. Use symbolic methods to find and plot the velocity $v(t)$ of the particle over the range $t=0$ to $t=5$ given the initial condition $v(0) = -0.2$. Then find and plot the position $s(t)$ of the particle over the same time period, given the additional auxiliary condition $s(0) = 0.1$. (5 marks)
```
from plotly.subplots import make_subplots
import pandas as pd
import plotly.graph_objects as go
import plotly
cols = plotly.colors.DEFAULT_PLOTLY_COLORS
small_filter=['1M','10M','50M']
big_filter=['100M','500M','1000M']
#print(small_filter)
#plain netcat
p_df = pd.read_csv('../benchmark-nebula/experiments/08_plain/results/08_plain_results.csv', sep=';')
#ipsec netcat
m_df = pd.read_csv('../benchmark-ipsec/experiments/06_ipsec_netcat_delay/results/06_ipsec_netcat_delay_results.csv', sep=';')
#nebula netcat
o_df = pd.read_csv('../benchmark-nebula/experiments/05_nebula_netcat_delay/results/05_nebula_netcat_delay_results.csv', sep=';')
p_df['size'] = p_df['size'].str.strip()
m_df['size'] = m_df['size'].str.strip()
o_df['size'] = o_df['size'].str.strip()
# i_m1_filter = m_df[m_df['size']=='1M']
# n_m1_filter = o_df[o_df['size']=='1M']
# M1
p_m1_filter = p_df[p_df['size']=='1M']
i_m1_filter = m_df[m_df['size']=='1M']
n_m1_filter = o_df[o_df['size']=='1M']
#plain
plain_m1_mean_df = p_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m1_std_df = p_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m1_mean_df = i_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m1_std_df = i_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m1_mean_df = n_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m1_std_df = n_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M10
p_m10_filter = p_df[p_df['size']=='10M']
i_m10_filter = m_df[m_df['size']=='10M']
n_m10_filter = o_df[o_df['size']=='10M']
#plain
plain_m10_mean_df = p_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m10_std_df = p_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m10_mean_df = i_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m10_std_df = i_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m10_mean_df = n_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m10_std_df = n_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M50
p_m50_filter = p_df[p_df['size']=='50M']
i_m50_filter = m_df[m_df['size']=='50M']
n_m50_filter = o_df[o_df['size']=='50M']
#plain
plain_m50_mean_df = p_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m50_std_df = p_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m50_mean_df = i_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m50_std_df = i_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m50_mean_df = n_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m50_std_df = n_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M100
p_m100_filter = p_df[p_df['size']=='100M']
i_m100_filter = m_df[m_df['size']=='100M']
n_m100_filter = o_df[o_df['size']=='100M']
#plain
plain_m100_mean_df = p_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m100_std_df = p_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m100_mean_df = i_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m100_std_df = i_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m100_mean_df = n_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m100_std_df = n_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M500
p_m500_filter = p_df[p_df['size']=='500M']
i_m500_filter = m_df[m_df['size']=='500M']
n_m500_filter = o_df[o_df['size']=='500M']
#plain
plain_m500_mean_df = p_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m500_std_df = p_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m500_mean_df = i_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m500_std_df = i_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m500_mean_df = n_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m500_std_df = n_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M1000
p_m1000_filter = p_df[p_df['size']=='1000M']
i_m1000_filter = m_df[m_df['size']=='1000M']
n_m1000_filter = o_df[o_df['size']=='1000M']
#plain
plain_m1000_mean_df = p_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m1000_std_df = p_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m1000_mean_df = i_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m1000_std_df = i_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m1000_mean_df = n_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m1000_std_df = n_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# fig = go.Figure()
fig = make_subplots(rows=3, cols=2,
subplot_titles=("Transferring 1M",
"Transferring 10M",
"Transferring 50M",
"Transferring 100M",
"Transferring 500M",
"Transferring 1000M")
)
# 1M plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m1_mean_df['size'],
y=plain_m1_mean_df.seconds,
marker=dict(color="#32a852"),
error_y=dict(type='data', array=plain_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=1
)
# 1M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m1_mean_df['size'],
y=ipsec_m1_mean_df.seconds,
marker=dict(color="#636EFA"),
error_y=dict(type='data', array=ipsec_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=1
)
# 1M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m1_mean_df['size'],
y=nebula_m1_mean_df.seconds,
marker=dict(color="#EF553B"),
error_y=dict(type='data', array=nebula_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=1, col=1
)
# 10M plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m10_mean_df['size'],
y=plain_m10_mean_df.seconds,
marker=dict(color="#32a852"), showlegend=False,
error_y=dict(type='data', array=plain_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=2
)
# 10M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m10_mean_df['size'],
y=ipsec_m10_mean_df.seconds,
marker=dict(color="#636EFA"), showlegend=False,
error_y=dict(type='data', array=ipsec_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=2
)
# 10M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m10_mean_df['size'],
y=nebula_m10_mean_df.seconds,
marker=dict(color="#EF553B"), showlegend=False,
error_y=dict(type='data', array=nebula_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=1, col=2
)
# 50M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m50_mean_df['size'],
y=plain_m50_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=1
)
# 50M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m50_mean_df['size'],
y=ipsec_m50_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=1
)
# 50M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m50_mean_df['size'],
y=nebula_m50_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=2, col=1
)
# 100M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m100_mean_df['size'],
y=plain_m100_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=2
)
# 100M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m100_mean_df['size'],
y=ipsec_m100_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=2
)
# 100M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m100_mean_df['size'],
y=nebula_m100_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=2, col=2
)
# 500M plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m500_mean_df['size'],
y=plain_m500_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=1
)
# 500M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m500_mean_df['size'],
y=ipsec_m500_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=1
)
# 500M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m500_mean_df['size'],
y=nebula_m500_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=3, col=1
)
# 1000M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m1000_mean_df['size'],
y=plain_m1000_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=2
)
# 1000M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m1000_mean_df['size'],
y=ipsec_m1000_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=2
)
# 1000M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m1000_mean_df['size'],
y=nebula_m1000_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=3, col=2
)
# Update xaxis properties
fig.update_xaxes(title_text="File size in megabytes", row=1, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=1, col=2)
fig.update_xaxes(title_text="File size in megabytes", row=2, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=2, col=2)
fig.update_xaxes(title_text="File size in megabytes", row=3, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=3, col=2)
# Update yaxis properties
fig.update_yaxes(title_text="Time (seconds)", row=1, col=1)
# fig.update_yaxes(title_text="Time (seconds)", row=1, col=2)
# fig.update_yaxes(title_text="Time (seconds)", row=1, col=3)
fig.update_yaxes(title_text="Time (seconds)", row=2, col=1)
fig.update_yaxes(title_text="Time (seconds)", row=3, col=1)
# fig.update_yaxes(title_text="Time (seconds)", row=2, col=2)
# fig.update_yaxes(title_text="Time (seconds)", row=2, col=3)
fig.update_layout(
legend_title_text='Trend' )
fig.show()
# fig.write_image("nebula_vs_ipsec_netcat_multiplot_test.png", width=1000, scale=1.5)
fig.write_image("nebula_vs_ipsec_netcat_multiplot_test.png", width=600, height=800, scale=1.5)
```
|
github_jupyter
|
from plotly.subplots import make_subplots
import pandas as pd
import plotly.graph_objects as go
import plotly
cols = plotly.colors.DEFAULT_PLOTLY_COLORS
small_filter=['1M','10M','50M']
big_filter=['100M','500M','1000M']
#print(small_filter)
#plain netcat
p_df = pd.read_csv('../benchmark-nebula/experiments/08_plain/results/08_plain_results.csv', sep=';')
#ipsec netcat
m_df = pd.read_csv('../benchmark-ipsec/experiments/06_ipsec_netcat_delay/results/06_ipsec_netcat_delay_results.csv', sep=';')
#nebula netcat
o_df = pd.read_csv('../benchmark-nebula/experiments/05_nebula_netcat_delay/results/05_nebula_netcat_delay_results.csv', sep=';')
p_df['size'] = p_df['size'].str.strip()
m_df['size'] = m_df['size'].str.strip()
o_df['size'] = o_df['size'].str.strip()
# i_m1_filter = m_df[m_df['size']=='1M']
# n_m1_filter = o_df[o_df['size']=='1M']
# M1
p_m1_filter = p_df[p_df['size']=='1M']
i_m1_filter = m_df[m_df['size']=='1M']
n_m1_filter = o_df[o_df['size']=='1M']
#plain
plain_m1_mean_df = p_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m1_std_df = p_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m1_mean_df = i_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m1_std_df = i_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m1_mean_df = n_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m1_std_df = n_m1_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M10
p_m10_filter = p_df[p_df['size']=='10M']
i_m10_filter = m_df[m_df['size']=='10M']
n_m10_filter = o_df[o_df['size']=='10M']
#plain
plain_m10_mean_df = p_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m10_std_df = p_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m10_mean_df = i_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m10_std_df = i_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m10_mean_df = n_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m10_std_df = n_m10_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M50
p_m50_filter = p_df[p_df['size']=='50M']
i_m50_filter = m_df[m_df['size']=='50M']
n_m50_filter = o_df[o_df['size']=='50M']
#plain
plain_m50_mean_df = p_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m50_std_df = p_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m50_mean_df = i_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m50_std_df = i_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m50_mean_df = n_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m50_std_df = n_m50_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M100
p_m100_filter = p_df[p_df['size']=='100M']
i_m100_filter = m_df[m_df['size']=='100M']
n_m100_filter = o_df[o_df['size']=='100M']
#plain
plain_m100_mean_df = p_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m100_std_df = p_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m100_mean_df = i_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m100_std_df = i_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m100_mean_df = n_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m100_std_df = n_m100_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M500
p_m500_filter = p_df[p_df['size']=='500M']
i_m500_filter = m_df[m_df['size']=='500M']
n_m500_filter = o_df[o_df['size']=='500M']
#plain
plain_m500_mean_df = p_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m500_std_df = p_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m500_mean_df = i_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m500_std_df = i_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m500_mean_df = n_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m500_std_df = n_m500_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# M1000
p_m1000_filter = p_df[p_df['size']=='1000M']
i_m1000_filter = m_df[m_df['size']=='1000M']
n_m1000_filter = o_df[o_df['size']=='1000M']
#plain
plain_m1000_mean_df = p_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
plain_m1000_std_df = p_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# IPsec
ipsec_m1000_mean_df = i_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
ipsec_m1000_std_df = i_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# Nebula
nebula_m1000_mean_df = n_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'mean'}).reset_index().sort_values('filename', ascending=True)
nebula_m1000_std_df = n_m1000_filter[['seconds', 'filename', 'size']].groupby(['size', 'filename']).agg({'seconds': 'std'}).reset_index().sort_values('filename', ascending=True)
# fig = go.Figure()
fig = make_subplots(rows=3, cols=2,
subplot_titles=("Transferring 1M",
"Transferring 10M",
"Transferring 50M",
"Transferring 100M",
"Transferring 500M",
"Transferring 1000M")
)
# 1M plain
fig.add_trace(
go.Bar(
name='Plain',
x='plain',#plain_m1_mean_df['size'],
y=plain_m1_mean_df.seconds,
marker=dict(color="#32a852"),
error_y=dict(type='data', array=plain_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=1
)
# 1M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x='ipsec',#ipsec_m1_mean_df['size'],
y=ipsec_m1_mean_df.seconds,
marker=dict(color="#636EFA"),
error_y=dict(type='data', array=ipsec_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=1
)
# 1M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x='nebnc',#nebula_m1_mean_df['size'],
y=nebula_m1_mean_df.seconds,
marker=dict(color="#EF553B"),
error_y=dict(type='data', array=nebula_m1_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=1, col=1
)
# 10M plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m10_mean_df['size'],
y=plain_m10_mean_df.seconds,
marker=dict(color="#32a852"), showlegend=False,
error_y=dict(type='data', array=plain_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=2
)
# 10M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m10_mean_df['size'],
y=ipsec_m10_mean_df.seconds,
marker=dict(color="#636EFA"), showlegend=False,
error_y=dict(type='data', array=ipsec_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=1, col=2
)
# 10M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m10_mean_df['size'],
y=nebula_m10_mean_df.seconds,
marker=dict(color="#EF553B"), showlegend=False,
error_y=dict(type='data', array=nebula_m10_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=1, col=2
)
# 50M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m50_mean_df['size'],
y=plain_m50_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=1
)
# 50M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m50_mean_df['size'],
y=ipsec_m50_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=1
)
# 50M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m50_mean_df['size'],
y=nebula_m50_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m50_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=2, col=1
)
# 100M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m100_mean_df['size'],
y=plain_m100_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=2
)
# 100M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m100_mean_df['size'],
y=ipsec_m100_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=2, col=2
)
# 100M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m100_mean_df['size'],
y=nebula_m100_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m100_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=2, col=2
)
# 500M plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m500_mean_df['size'],
y=plain_m500_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=1
)
# 500M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m500_mean_df['size'],
y=ipsec_m500_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=1
)
# 500M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m500_mean_df['size'],
y=nebula_m500_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m500_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=3, col=1
)
# 1000M Plain
fig.add_trace(
go.Bar(
name='Plain',
x=plain_m1000_mean_df['size'],
y=plain_m1000_mean_df.seconds,
marker=dict(color="#32a852"),showlegend=False,
error_y=dict(type='data', array=plain_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=2
)
# 1000M IPsec
fig.add_trace(
go.Bar(
name='IPsec',
x=ipsec_m1000_mean_df['size'],
y=ipsec_m1000_mean_df.seconds,
marker=dict(color="#636EFA"),showlegend=False,
error_y=dict(type='data', array=ipsec_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000')
), row=3, col=2
)
# 1000M Nebula
fig.add_trace(
go.Bar(
name='Nebula',
x=nebula_m1000_mean_df['size'],
y=nebula_m1000_mean_df.seconds,
marker=dict(color="#EF553B"),showlegend=False,
error_y=dict(type='data', array=nebula_m1000_std_df.seconds, visible=True, thickness=1.5, color='#000000'),
), row=3, col=2
)
# Update xaxis properties
fig.update_xaxes(title_text="File size in megabytes", row=1, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=1, col=2)
fig.update_xaxes(title_text="File size in megabytes", row=2, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=2, col=2)
fig.update_xaxes(title_text="File size in megabytes", row=3, col=1)
fig.update_xaxes(title_text="File size in megabytes", row=3, col=2)
# Update yaxis properties
fig.update_yaxes(title_text="Time (seconds)", row=1, col=1)
# fig.update_yaxes(title_text="Time (seconds)", row=1, col=2)
# fig.update_yaxes(title_text="Time (seconds)", row=1, col=3)
fig.update_yaxes(title_text="Time (seconds)", row=2, col=1)
fig.update_yaxes(title_text="Time (seconds)", row=3, col=1)
# fig.update_yaxes(title_text="Time (seconds)", row=2, col=2)
# fig.update_yaxes(title_text="Time (seconds)", row=2, col=3)
fig.update_layout(
legend_title_text='Trend' )
fig.show()
# fig.write_image("nebula_vs_ipsec_netcat_multiplot_test.png", width=1000, scale=1.5)
fig.write_image("nebula_vs_ipsec_netcat_multiplot_test.png", width=600, height=800, scale=1.5)
# Lesson 3 Exercise 1: Three Queries Three Tables
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Cassandra_logo.svg/1200px-Cassandra_logo.svg.png" width="250" height="250">
### Walk through the basics of creating a table in Apache Cassandra, inserting rows of data, and doing a simple CQL query to validate the information. You will practice denormalization and the concept of one table per query, which is an encouraged practice with Apache Cassandra.
#### We will use a Python wrapper/driver called `cassandra` to run the Apache Cassandra queries. This library should be preinstalled here; to install it yourself in the future, you can run this command in a notebook cell:
! pip install cassandra-driver
#### More documentation can be found here: https://datastax.github.io/python-driver/
#### Import Apache Cassandra python package
```
import cassandra
```
### Create a connection to the database
```
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
```
### Create a keyspace to work in
```
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
```
#### Connect to our Keyspace. Compare this to how we had to create a new session in PostgreSQL.
```
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
```
### Let's imagine we would like to start creating a Music Library of albums.
### We want to ask 3 questions of the data
#### 1. Give every album in the music library that was released in a given year
`select * from music_library WHERE year=1970`
#### 2. Give every album in the music library that was created by a given artist
`select * from artist_library WHERE artist_name='The Beatles'`
#### 3. Give all the information from the music library about a given album
`select * from album_library WHERE album_name='Close To You'`
### Because we want to do three different queries, we will need different tables that partition the data differently.
<img src="images/table1.png" width="350" height="350">
<img src="images/table2.png" width="350" height="350">
<img src="images/table0.png" width="550" height="550">
### TO-DO: Create the tables.
```
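# One table per query: each table's PRIMARY KEY begins with the column its query filters on.
#   music_library  -> partition key year        (query 1 filters on year)
#   artist_library -> partition key artist_name (query 2 filters on artist_name)
#   album_library  -> partition key album_name  (query 3 filters on album_name)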
query = "CREATE TABLE IF NOT EXISTS music_library"
query = query + "(year int, artist_name text, album_name text, PRIMARY KEY(year, artist_name))"
try:
session.execute(query)
except Exception as e:
print(e)
query1 = "CREATE TABLE IF NOT EXISTS artist_library"
query1 = query1 + "(artist_name text, year int, album_name text, PRIMARY KEY (artist_name, year))"
try:
session.execute(query1)
except Exception as e:
print(e)
query2 = "CREATE TABLE IF NOT EXISTS album_library"
query2 = query2 + "(artist_name text, album_name text, year int, PRIMARY KEY (album_name, artist_name))"
try:
session.execute(query2)
except Exception as e:
print(e)
```
### TO-DO: Insert data into the tables
```
def insertData(query, values):
try:
session.execute(query, (values[0], values[1], values[2]))
print("Ingested data to {} table!".format(query.split(' ')[2]))
except Exception as e:
print(e)
query = "INSERT INTO music_library (year, artist_name, album_name) VALUES (%s, %s, %s)"
query1 = "INSERT INTO artist_library (artist_name, year, album_name) VALUES (%s, %s, %s)"
query2 = "INSERT INTO album_library (album_name, artist_name, year) VALUES (%s, %s, %s)"
insertData(query, [1970, "The Beatles", "Let it Be"])
insertData(query, [1965, "The Beatles", "Rubber Soul"])
insertData(query, [1965, "The Who", "My Generation"])
insertData(query, [1966, "The Monkees", "The Monkees"])
insertData(query, [1970, "The Carpenters", "Close To You"])
insertData(query1, ["The Beatles", 1970, "Let it Be"])
insertData(query1, ["The Beatles", 1965, "Rubber Soul"])
insertData(query1, ["The Who", 1965, "My Generation"])
insertData(query1, ["The Monkees", 1966, "The Monkees"])
insertData(query1, ["The Carpenters", 1970, "Close To You"])
insertData(query2, ["Let it Be", "The Beatles", 1970])
insertData(query2, ["Rubber Soul", "The Beatles", 1965])
insertData(query2, ["My Generation", "The Who", 1965])
insertData(query2, ["The Monkees", "The Monkees", 1966])
insertData(query2, ["Close To You", "The Carpenters", 1970])
```
This might have felt unnatural: inserting duplicate data into the tables. If we just normalized these tables, we wouldn't need the extra copies! While this is true, remember there are no `JOINS` in Apache Cassandra. To get the benefits of high availability and scalability, denormalization is how this must be done.
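To see why each query really needs its own table, here is a quick optional check (my addition, not part of the original exercise): filtering `music_library` on a column that is not its partition key is rejected by Cassandra unless you add `ALLOW FILTERING`, because it would force a scan of every partition.
```
# Optional illustration (not part of the exercise): artist_name is only a
# clustering column in music_library, so filtering on it without the
# partition key (year) is rejected by Cassandra.
try:
    session.execute("select * from music_library WHERE artist_name='The Beatles'")
except Exception as e:
    # Expect an error along the lines of: "Cannot execute this query ... use ALLOW FILTERING"
    print(e)
```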
### TO-DO: Validate the Data Model
```
query = "select * from music_library WHERE year=1970"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.artist_name, row.album_name)
```
### Your output should be:
1970 The Beatles Let it Be<br>
1970 The Carpenters Close To You
### TO-DO: Validate the Data Model
```
query = "select * from artist_library WHERE artist_name='The Beatles'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.artist_name, row.album_name, row.year)
```
### Your output should be:
The Beatles Rubber Soul 1965 <br>
The Beatles Let it Be 1970
### TO-DO: Validate the Data Model
```
query = "select * from album_library WHERE album_name='Close To You'"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.artist_name, row.year, row.album_name)
```
### Your output should be:
The Carpenters 1970 Close To You
```
def dropTable(table):
query = 'drop table {}'.format(table)
try:
rows = session.execute(query)
print('{} was dropped!'.format(table))
except Exception as e:
print(e)
dropTable('music_library')
dropTable('artist_library')
dropTable('album_library')
```
### And finally close the session and cluster connection
```
session.shutdown()
cluster.shutdown()
```
# Stacking
In this notebook we look at the best parameters found for the following models:
1. XGBoost
2. LightGBM
3. CatBoost
4. HistGradientBoosting (scikit-learn)
We then use stacking to ensemble these 4 models.
**Note:** I leave the models on their verbose settings so I can monitor their training since it will take a long time to finish
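For context (this sketch is my addition, not part of the original pipeline): stacking trains several level-0 models, collects their out-of-fold predictions, and fits a level-1 meta-model on those predictions. scikit-learn wraps this pattern in `StackingClassifier`; a minimal sketch with default hyperparameters might look like the block below. This notebook instead builds the out-of-fold predictions manually, which keeps full control over each base model's tuned parameters and early stopping.
```
# Minimal stacking sketch (illustration only; default hyperparameters assumed)
from sklearn.ensemble import StackingClassifier, HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

base_models = [
    ('xgb', XGBClassifier()),
    ('lgbm', LGBMClassifier()),
    ('hist', HistGradientBoostingClassifier()),
]
stack = StackingClassifier(
    estimators = base_models,
    final_estimator = LogisticRegression(),
    cv = 3,                          # out-of-fold predictions feed the meta-model
    stack_method = 'predict_proba',  # use class probabilities as meta-features
)
# stack.fit(X_train, y_train); stack.predict_proba(X_test)[:, 1]
```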
```
# Global variables for testing changes to this notebook quickly
RANDOM_SEED = 0
NUM_TREES = 15000
EARLY_STOP = 200
NUM_FOLDS = 3
TEST = False
SUBMIT = True
# General imports
import numpy as np
import pandas as pd
import scipy.stats as stats
import pyarrow
import time
import gc
# Evaluation and model selection
from sklearn.base import clone
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.linear_model import LogisticRegression, SGDClassifier
# Models
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import HistGradientBoostingClassifier
# Hide warnings (makes optuna output easier to parse)
import warnings
warnings.filterwarnings('ignore')
```
# Preparing the Data
We define our cross-validation scheme at the start to ensure that it is the same across all the models we consider
```
%%time
# Load Data
train = pd.read_feather("../data/train.feather")
test = pd.read_feather("../data/test.feather")
submission = pd.read_csv('../data/sample_submission.csv')
if TEST:
train, junk = train_test_split(
train,
train_size = 0.1,
shuffle = True,
stratify = train['target'],
)
train.reset_index(drop = True, inplace = True)
del junk
gc.collect()
# Relevant features
features = [x for x in train.columns if x not in ['id','target']]
# Stratified k-fold cross-validation
train['kfold'] = -1
skf = StratifiedKFold(n_splits = NUM_FOLDS, shuffle = True, random_state = RANDOM_SEED)
for fold, (train_idx, valid_idx) in enumerate(skf.split(train, train['target'])):
    train.loc[valid_idx, 'kfold'] = fold
oof_preds = pd.DataFrame(
data = dict(kfold = train['kfold'])
)
test_preds = pd.DataFrame(
data = dict(id = test['id'])
)
```
# Feature Engineering
We experiment with feature engineering using row statistics, primarily to add variance to our predictions.
```
def create_row_stats(data):
cont_cols, cat_cols = list(), list()
for col in features:
if data[col].dtype.name.startswith("int"):
cat_cols.append(col)
else:
cont_cols.append(col)
new_data = data.copy()
new_data['binary_count'] = data[cat_cols].sum(axis=1)
new_data['binary_std'] = data[cat_cols].std(axis=1)
new_data['min'] = data[cont_cols].min(axis=1)
new_data['std'] = data[cont_cols].std(axis=1)
new_data['max'] = data[cont_cols].max(axis=1)
new_data['median'] = data[cont_cols].median(axis=1)
new_data['mean'] = data[cont_cols].mean(axis=1)
#new_data['var'] = data[cont_cols].var(axis=1)
#new_data['sum'] = data[cont_cols].sum(axis=1)
#new_data['sem'] = data[cont_cols].sem(axis=1)
new_data['skew'] = data[cont_cols].skew(axis=1)
new_data['median_abs_dev'] = stats.median_abs_deviation(data[cont_cols], axis=1)
new_data['zscore'] = (np.abs(stats.zscore(data[cont_cols]))).sum(axis=1)
return new_data
%%time
train = create_row_stats(train)
test = create_row_stats(test)
# New features
all_features = [x for x in train.columns if x not in ['id','target','kfold']]
assert features != all_features
```
# 1. XGBoost
We use the best parameters from [this Kaggle notebook](https://www.kaggle.com/rsizem2/tps-10-21-optuna-w-pruning-callbacks-xgboost), except that we train on CPU rather than GPU, which in many cases gives more accurate results.
```
# Best Parameters
xgboost_params = {
'random_state': RANDOM_SEED,
'n_estimators': NUM_TREES,
#'tree_method': 'hist',
'max_depth': 5,
'learning_rate': 0.02261104274598307,
'min_child_weight': 74.7573299373233,
'subsample': 0.766,
'colsample_bytree': 0.268,
'colsample_bylevel': 0.591,
'reg_lambda': 75.35694292360638
}
def train_xgboost(model_params = {}, fit_params = {}, new_features = False):
# Store the predictions
oof_preds = np.zeros((train.shape[0],))
test_preds = np.zeros((test.shape[0],))
print('')
# Stratified k-fold cross-validation
for fold in range(NUM_FOLDS):
# Training and Validation Sets
        if new_features:
            X_train, y_train = train[train.kfold != fold][all_features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][all_features], train[train.kfold == fold]['target']
            X_test = test[all_features]
        else:
            X_train, y_train = train[train.kfold != fold][features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][features], train[train.kfold == fold]['target']
            X_test = test[features]
# Define Model
model = XGBClassifier(**{**xgboost_params, **model_params})
gc.collect()
start = time.time()
model.fit(
X_train, y_train,
verbose = False,
eval_set = [(X_valid, y_valid)],
eval_metric = "auc",
early_stopping_rounds = EARLY_STOP,
**fit_params
)
# validation and test predictions
valid_preds = model.predict_proba(X_valid)[:, 1]
test_preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
oof_preds[train.kfold == fold] = valid_preds
# fold auc score
fold_auc = roc_auc_score(y_valid, valid_preds)
end = time.time()
print(f'Fold {fold} (AUC): {round(fold_auc, 5)} in {round(end - start, 2)}s.')
return test_preds, oof_preds
# Train 3 models
test_preds['XGBoost'], oof_preds['XGBoost'] = train_xgboost()
test_preds['XGB_Hist'], oof_preds['XGB_Hist'] = train_xgboost(
model_params = dict(tree_method = 'hist')
)
test_preds['XGB_Stats'], oof_preds['XGB_Stats'] = train_xgboost(new_features = True)
```
# 2. LightGBM
```
# Best Parameters
lightgbm_params = {
'random_state': RANDOM_SEED,
'n_estimators': NUM_TREES,
'max_depth': 6,
'learning_rate': 0.009099999999999999,
'min_child_samples': 4260,
'subsample': 0.87,
'subsample_freq': 3,
'colsample_bytree': 0.27,
'reg_lambda': 0.0003694272556917343,
'num_leaves': 26,
}
def train_lightgbm(model_params = {}, fit_params = {}, new_features = False):
# Store the holdout predictions
oof_preds = np.zeros((train.shape[0],))
test_preds = np.zeros((test.shape[0],))
print('')
# Stratified k-fold cross-validation
for fold in range(NUM_FOLDS):
# Training and Validation Sets
        if new_features:
            X_train, y_train = train[train.kfold != fold][all_features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][all_features], train[train.kfold == fold]['target']
            X_test = test[all_features]
        else:
            X_train, y_train = train[train.kfold != fold][features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][features], train[train.kfold == fold]['target']
            X_test = test[features]
# Define Model
model = LGBMClassifier(**{**lightgbm_params, **model_params})
gc.collect()
start = time.time()
model.fit(
X_train, y_train,
verbose = 0,
eval_set = [(X_valid, y_valid)],
eval_metric = "auc",
early_stopping_rounds = EARLY_STOP,
**fit_params
)
# validation and test predictions
valid_preds = model.predict_proba(X_valid)[:, 1]
test_preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
oof_preds[train.kfold == fold] = valid_preds
# fold auc score
fold_auc = roc_auc_score(y_valid, valid_preds)
end = time.time()
print(f'Fold {fold} (AUC): {round(fold_auc, 5)} in {round(end - start, 2)}s.')
return test_preds, oof_preds
# Train 2 models
test_preds['LightGBM'], oof_preds['LightGBM'] = train_lightgbm()
test_preds['LGBM_Stats'], oof_preds['LGBM_Stats'] = train_lightgbm(new_features = True)
```
# 3. CatBoost
```
# Best Parameters
catboost_params = {
'random_state': RANDOM_SEED,
'n_estimators': NUM_TREES,
'boosting_type': 'Plain',
'bootstrap_type': 'Bernoulli',
'early_stopping_rounds': EARLY_STOP,
'eval_metric': 'AUC',
'max_depth': 7,
'learning_rate': 0.01,
'min_child_samples': 12710,
'random_strength': 33.21156029537479,
'leaf_estimation_iterations': 1,
'subsample': 0.6990000000000001,
'reg_lambda': 60.52806724303393
}
def train_catboost(model_params = {}, fit_params = {}, new_features = False):
# Store the predictions
oof_preds = np.zeros((train.shape[0],))
test_preds = np.zeros((test.shape[0],))
print('')
# Stratified k-fold cross-validation
for fold in range(NUM_FOLDS):
# Training and Validation Sets
        if new_features:
            X_train, y_train = train[train.kfold != fold][all_features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][all_features], train[train.kfold == fold]['target']
            X_test = test[all_features]
        else:
            X_train, y_train = train[train.kfold != fold][features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][features], train[train.kfold == fold]['target']
            X_test = test[features]
start = time.time()
# Define Model
model = CatBoostClassifier(**{**catboost_params, **model_params})
gc.collect()
model.fit(
X_train, y_train,
verbose = False,
eval_set = [(X_valid, y_valid)],
use_best_model = True,
**fit_params
)
# validation and test predictions
valid_preds = model.predict_proba(X_valid)[:, 1]
test_preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
oof_preds[train.kfold == fold] = valid_preds
# fold auc score
fold_auc = roc_auc_score(y_valid, valid_preds)
end = time.time()
print(f'Fold {fold} (AUC): {round(fold_auc, 5)} in {round(end - start, 2)}s.')
return test_preds, oof_preds
# Train CatBoost
test_preds['CatBoost'], oof_preds['CatBoost'] = train_catboost()
test_preds['Cat_Stats'], oof_preds['Cat_Stats'] = train_catboost(new_features = True)
```
# 4. Scikit-Learn
```
# Best Parameters
histgbc_params = {
'random_state': RANDOM_SEED,
'max_iter': NUM_TREES,
'validation_fraction': 0.33,
'early_stopping': True,
'n_iter_no_change': EARLY_STOP,
'verbose': 0,
}
def train_histgbm(model_params = {}, fit_params = {}, new_features = False):
# Store the predictions
oof_preds = np.zeros((train.shape[0],))
test_preds = np.zeros((test.shape[0],))
print('')
# Stratified k-fold cross-validation
for fold in range(NUM_FOLDS):
# Training and Validation Sets
        if new_features:
            X_train, y_train = train[train.kfold != fold][all_features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][all_features], train[train.kfold == fold]['target']
            X_test = test[all_features]
        else:
            X_train, y_train = train[train.kfold != fold][features], train[train.kfold != fold]['target']
            X_valid, y_valid = train[train.kfold == fold][features], train[train.kfold == fold]['target']
            X_test = test[features]
# Define Model
model = HistGradientBoostingClassifier(**{**histgbc_params, **model_params})
gc.collect()
start = time.time()
model.fit(
X_train, y_train,
**fit_params
)
# validation and test predictions
valid_preds = model.predict_proba(X_valid)[:, 1]
test_preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
oof_preds[train.kfold == fold] = valid_preds
# fold auc score
fold_auc = roc_auc_score(y_valid, valid_preds)
end = time.time()
print(f'Fold {fold} (AUC): {round(fold_auc, 5)} in {round(end - start, 2)}s.')
return test_preds, oof_preds
# Train 2 models (with and without the engineered row statistics)
test_preds['HistGBM'], oof_preds['HistGBM'] = train_histgbm()
test_preds['Hist_Stats'], oof_preds['Hist_Stats'] = train_histgbm(new_features = True)
```
# Predictions
```
oof_preds.head()
test_preds.head()
```
# Generate Submissions
We create submissions for the CPU-generated predictions to see if they are better than the GPU-generated models we created with Kaggle notebooks.
```
# Make submission
submission['target'] = test_preds['XGBoost']
if SUBMIT: submission.to_csv(f'../output/xgboost_cpu_{NUM_FOLDS}fold_submission.csv', index=False)
# Make submission
submission['target'] = test_preds['CatBoost']
if SUBMIT: submission.to_csv(f'../output/catboost_cpu_{NUM_FOLDS}fold_submission.csv', index=False)
```
# Stacking
We use XGBoost and LightGBM as meta models for stacking:
## 1. LightGBM Classifier
```
def stack_lightgbm():
preds = np.zeros((test.shape[0],))
scores = np.zeros(NUM_FOLDS)
for j in range(NUM_FOLDS):
X_train = oof_preds[oof_preds.kfold != j].drop('kfold', axis = 1)
X_valid = oof_preds[oof_preds.kfold == j].drop('kfold', axis = 1)
y_train = train['target'][train.kfold != j]
y_valid = train['target'][train.kfold == j]
X_test = test_preds.drop('id', axis = 1)
model = LGBMClassifier(random_state = RANDOM_SEED, n_estimators = 200)
model.fit(
X_train, y_train,
verbose = 0,
eval_set = [(X_valid, y_valid)],
eval_metric = "auc",
early_stopping_rounds = 25,
)
preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
preds_valid = model.predict_proba(X_valid)[:, 1]
scores[j] = roc_auc_score(y_valid, preds_valid)
print("Fold", j ,"(AUC):", scores[j])
print("Avg (AUC):", round(scores.mean(),6))
print("Min (AUC):", round(scores.min(),6))
return preds
# LGBMClassifier meta model
submission['target'] = stack_lightgbm()
if SUBMIT: submission.to_csv(f'../output/stack_lgbm_{NUM_FOLDS}fold_submission.csv', index=False)
```
## 2. XGBoost Classifier
```
def stack_xgboost():
preds = np.zeros((test.shape[0],))
scores = np.zeros(NUM_FOLDS)
for j in range(NUM_FOLDS):
X_train = oof_preds[oof_preds.kfold != j].drop('kfold', axis = 1)
X_valid = oof_preds[oof_preds.kfold == j].drop('kfold', axis = 1)
y_train = train['target'][train.kfold != j]
y_valid = train['target'][train.kfold == j]
X_test = test_preds.drop('id', axis = 1)
model = XGBClassifier(random_state = RANDOM_SEED, n_estimators = 200)
model.fit(
X_train, y_train,
verbose = False,
eval_set = [(X_valid, y_valid)],
eval_metric = "auc",
early_stopping_rounds = 25,
)
preds += model.predict_proba(X_test)[:, 1] / NUM_FOLDS
preds_valid = model.predict_proba(X_valid)[:, 1]
scores[j] = roc_auc_score(y_valid, preds_valid)
print("Fold", j ,"(AUC):", scores[j])
print("Avg (AUC):", round(scores.mean(),6))
print("Min (AUC):", round(scores.min(),6))
return preds
# XGBClassifier meta model
submission['target'] = stack_xgboost()
if SUBMIT: submission.to_csv(f'../output/stack_xgb_{NUM_FOLDS}fold_submission.csv', index=False)
```
# Quantum Key Distribution
## Contents
1. Introduction
2. Protocol Overview
3. Qiskit Example: Without Interception
4. Qiskit Example: With Interception
5. Risk Analysis
## 1. Introduction
When Alice and Bob want to communicate a secret message (such as Bob’s online banking details) over an insecure channel (such as the internet), it's essential to encrypt the message. Since cryptography is a large area and almost all of it is outside the scope of this textbook, we will have to take it as given that Alice and Bob having a secret key that no-one else knows is useful, and allows them to communicate using symmetric-key cryptography.
If Alice and Bob want to use Eve’s classical communication channel to share their key, it is impossible to tell if Eve has made a copy of this key for herself; they must place complete trust in Eve that she is not listening. If, however, Eve provides a quantum communication channel, Alice and Bob no longer need to trust Eve at all: they will know if she tries to read Bob’s message before it gets to Alice.
For some readers, it may be useful to give an idea of how a quantum channel may be physically implemented. An example of a classical channel could be a telephone line; we send electric signals through the line that represent our message (or bits). A proposed example of a quantum communication channel could be some kind of fibre-optic cable, through which we can send individual photons (particles of light). Photons have a property called _polarisation_, and this polarisation can be one of two states. We can use this to represent a qubit.
## 2. Protocol Overview
The protocol makes use of the fact that measuring a qubit can change its state. If Alice sends Bob a qubit, and an eavesdropper (Eve) tries to measure it before Bob does, there is a chance that Eve’s measurement will change the state of the qubit and Bob will not receive the qubit state Alice sent.
```
from qiskit import QuantumCircuit, execute, Aer
from qiskit.visualization import plot_histogram, plot_bloch_multivector
from numpy.random import randint
import numpy as np
print("Imports Successful")
```
If Alice prepares a qubit in the state $|+\rangle$ (`0` in the X-basis), and Bob measures it in the X-basis, Bob is sure to measure `0`:
```
qc = QuantumCircuit(1,1)
# Alice prepares qubit in state |+>
qc.h(0)
qc.barrier()
# Alice now sends the qubit to Bob
# who measures it in the X-basis
qc.h(0)
qc.measure(0,0)
# Draw and simulate circuit
display(qc.draw())
svs = Aer.get_backend('qasm_simulator')
job = execute(qc, svs)
plot_histogram(job.result().get_counts())
```
But if Eve tries to measure this qubit in the Z-basis before it reaches Bob, she will change the qubit's state from $|+\rangle$ to either $|0\rangle$ or $|1\rangle$, and Bob is no longer certain to measure `0`:
```
qc = QuantumCircuit(1,1)
# Alice prepares qubit in state |+>
qc.h(0)
# Alice now sends the qubit to Bob
# but Eve intercepts and tries to read it
qc.measure(0, 0)
qc.barrier()
# Eve then passes this on to Bob
# who measures it in the X-basis
qc.h(0)
qc.measure(0,0)
# Draw and simulate circuit
display(qc.draw())
svs = Aer.get_backend('qasm_simulator')
job = execute(qc, svs)
plot_histogram(job.result().get_counts())
```
We can see here that Bob now has a 50% chance of measuring `1`, and if he does, he and Alice will know there is something wrong with their channel.
The quantum key distribution protocol involves repeating this process enough times that an eavesdropper has a negligible chance of getting away with this interception. It is roughly as follows:
**- Step 1**
Alice chooses a string of random bits, e.g.:
`1000101011010100`
And a random choice of basis for each bit:
`ZZXZXXXZXZXXXXXX`
Alice keeps these two pieces of information private to herself.
**- Step 2**
Alice then encodes each bit onto a string of qubits using the basis she chose, this means each qubit is in one of the states $|0\rangle$, $|1\rangle$, $|+\rangle$ or $|-\rangle$, chosen at random. In this case, the string of qubits would look like this:
$$ |1\rangle|0\rangle|+\rangle|0\rangle|-\rangle|+\rangle|-\rangle|0\rangle|-\rangle|1\rangle|+\rangle|-\rangle|+\rangle|-\rangle|+\rangle|+\rangle
$$
This is the message she sends to Bob.
**- Step 3**
Bob then measures each qubit at random, for example, he might use the bases:
`XZZZXZXZXZXZZZXZ`
And Bob keeps the measurement results private.
**- Step 4**
Bob and Alice then publicly share which basis they used for each qubit. If Bob measured a qubit in the same basis Alice prepared it in, they use this to form part of their shared secret key; otherwise they discard the information for that bit.
**- Step 5**
Finally, Bob and Alice share a random sample of their keys, and if the samples match, they can be sure (to a small margin of error) that their transmission is successful.
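Before working through the code, it is worth a rough estimate of the risk (this back-of-the-envelope calculation is an addition here; the notebook's own treatment is in section 5, Risk Analysis). If Eve intercepts each qubit and measures it in a randomly chosen basis, then for every bit that ends up in the comparison sample she picks the wrong basis with probability $1/2$, and in that case Bob's result disagrees with Alice's with probability $1/2$. Each sampled bit therefore exposes her with probability $1/4$, so she escapes detection with probability $(3/4)^s$ for a sample of size $s$:
```
# Chance an intercept-and-resend attack goes unnoticed, assuming Eve measures
# every qubit in a randomly chosen basis (as in the interception example later)
for s in [5, 15, 50]:
    print("sample size %2i -> Eve escapes detection with probability %.6f" % (s, 0.75**s))
```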
## 3. Qiskit Example: Without Interception
Let’s first see how the protocol works when no-one is listening in, then we can see how Alice and Bob are able to detect an eavesdropper. As always, let's start by importing everything we need:
To generate pseudo-random keys, we will use the `randint` function from numpy. To make sure you can reproduce the results on this page, we will set the seed to 0:
```
np.random.seed(seed=0)
```
We will call the length of Alice's initial message `n`. In this example, Alice will send a message 100 qubits long:
```
n = 100
```
### 3.1 Step 1:
Alice generates her random set of bits:
```
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
print(alice_bits)
```
At the moment, the set of bits '`alice_bits`' is only known to Alice. We will keep track of what information is only known to Alice, what information is only known to Bob, and what has been sent over Eve's channel in a table like this:
| Alice's Knowledge |Over Eve's Channel| Bob's Knowledge |
|:-----------------:|:----------------:|:---------------:|
| alice_bits | | |
### 3.2 Step 2:
Alice chooses to encode each bit on a qubit in the $X$- or $Z$-basis at random, and stores the choice for each qubit in `alice_bases`. In this case, a `0` means "prepare in the $Z$-basis", and a `1` means "prepare in the $X$-basis":
```
np.random.seed(seed=0)
n = 100
## Step 1
#Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
print(alice_bases)
```
Alice also keeps this knowledge private:
| Alice's Knowledge |Over Eve's Channel| Bob's Knowledge |
|:-----------------:|:----------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
The function `encode_message` below creates a list of `QuantumCircuit`s, each representing a single qubit in Alice's message:
```
def encode_message(bits, bases):
message = []
for i in range(n):
qc = QuantumCircuit(1,1)
if bases[i] == 0: # Prepare qubit in Z-basis
if bits[i] == 0:
pass
else:
qc.x(0)
else: # Prepare qubit in X-basis
if bits[i] == 0:
qc.h(0)
else:
qc.x(0)
qc.h(0)
qc.barrier()
message.append(qc)
return message
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
```
We can see that the first bit in `alice_bits` is `0`, and the basis she encodes this in is the $X$-basis (represented by `1`):
```
print('bit = %i' % alice_bits[0])
print('basis = %i' % alice_bases[0])
```
And if we view the first circuit in `message` (representing the first qubit in Alice's message), we can verify that Alice has prepared a qubit in the state $|+\rangle$:
```
message[0].draw()
```
As another example, we can see that the bit at index `4` of `alice_bits` is `1`, and it is encoded in the $Z$-basis, so Alice prepares the corresponding qubit in the state $|1\rangle$:
```
print('bit = %i' % alice_bits[4])
print('basis = %i' % alice_bases[4])
message[4].draw()
```
This message of qubits is then sent to Bob over Eve's quantum channel:
| Alice's Knowledge |Over Eve's Channel| Bob's Knowledge |
|:-----------------:|:----------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
### 3.3 Step 3:
Bob then measures each qubit in the $X$ or $Z$-basis at random and stores this information:
```
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Step 3
# Decide which basis to measure in:
bob_bases = randint(2, size=n)
print(bob_bases)
```
`bob_bases` stores Bob's choice for which basis he measures each qubit in.
| Alice's Knowledge |Over Eve's Channel| Bob's Knowledge |
|:-----------------:|:----------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
Below, the function `measure_message` applies the corresponding measurement and simulates the result of measuring each qubit. We store the measurement results in `bob_results`.
```
def measure_message(message, bases):
backend = Aer.get_backend('qasm_simulator')
measurements = []
for q in range(n):
if bases[q] == 0: # measuring in Z-basis
message[q].measure(0,0)
if bases[q] == 1: # measuring in X-basis
message[q].h(0)
message[q].measure(0,0)
result = execute(message[q], backend, shots=1, memory=True).result()
measured_bit = int(result.get_memory()[0])
measurements.append(measured_bit)
return measurements
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Step 3
# Decide which basis to measure in:
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
```
We can see that the circuit in `message[0]` (representing the 0th qubit) has had an $X$-measurement added to it by Bob:
```
message[0].draw()
```
Since Bob has by chance chosen to measure in the same basis Alice encoded the qubit in, Bob is guaranteed to get the result `0`. For the 6th qubit (shown below), Bob's random choice of measurement is not the same as Alice's, and Bob's result has only a 50% chance of matching Alice's.
```
message[6].draw()
print(bob_results)
```
Bob keeps his results private.
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
### 3.4 Step 4:
After this, Alice reveals (through Eve's channel) which qubits were encoded in which basis:
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
And Bob reveals which basis he measured each qubit in:
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
| bob_bases | bob_bases | |
If Bob happened to measure a bit in the same basis Alice prepared it in, this means the entry in `bob_results` will match the corresponding entry in `alice_bits`, and they can use that bit as part of their key. If they measured in different bases, Bob's result is random, and they both throw that entry away. Here is a function `remove_garbage` that does this for us:
```
def remove_garbage(a_bases, b_bases, bits):
good_bits = []
for q in range(n):
if a_bases[q] == b_bases[q]:
# If both used the same basis, add
# this to the list of 'good' bits
good_bits.append(bits[q])
return good_bits
```
Alice and Bob both discard the useless bits, and use the remaining bits to form their secret keys:
```
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Step 3
# Decide which basis to measure in:
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
## Step 4
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
print(alice_key)
```
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
| bob_bases | bob_bases | |
| alice_key | | |
```
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Step 3
# Decide which basis to measure in:
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
## Step 4
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
bob_key = remove_garbage(alice_bases, bob_bases, bob_results)
print(bob_key)
```
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
| bob_bases | bob_bases | |
| alice_key | | bob_key |
### 3.5 Step 5:
Finally, Bob and Alice compare a random selection of the bits in their keys to make sure the protocol has worked correctly:
```
def sample_bits(bits, selection):
sample = []
for i in selection:
# use np.mod to make sure the
# bit we sample is always in
# the list range
i = np.mod(i, len(bits))
# pop(i) removes the element of the
# list at index 'i'
sample.append(bits.pop(i))
return sample
```
Alice and Bob both broadcast these publicly, and remove them from their keys as they are no longer secret:
```
np.random.seed(seed=0)
n = 100
## Step 1
# Alice generates bits
alice_bits = randint(2, size=n)
## Step 2
# Create an array to tell us which qubits
# are encoded in which bases
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Step 3
# Decide which basis to measure in:
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
## Step 4
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
bob_key = remove_garbage(alice_bases, bob_bases, bob_results)
## Step 5
sample_size = 15
bit_selection = randint(n, size=sample_size)
bob_sample = sample_bits(bob_key, bit_selection)
print(" bob_sample = " + str(bob_sample))
alice_sample = sample_bits(alice_key, bit_selection)
print("alice_sample = "+ str(alice_sample))
```
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
| bob_bases | bob_bases | |
| alice_key | | bob_key |
| bob_sample | bob_sample | bob_sample |
| alice_sample | alice_sample | alice_sample |
If the protocol has worked correctly without interference, their samples should match:
```
bob_sample == alice_sample
```
If their samples match, it means (with high probability) `alice_key == bob_key`. They now share a secret key they can use to encrypt their messages!
| Alice's Knowledge | Over Eve's Channel | Bob's Knowledge |
|:-----------------:|:------------------:|:---------------:|
| alice_bits | | |
| alice_bases | | |
| message | message | message |
| | | bob_bases |
| | | bob_results |
| | alice_bases | alice_bases |
| bob_bases | bob_bases | |
| alice_key | | bob_key |
| bob_sample | bob_sample | bob_sample |
| alice_sample | alice_sample | alice_sample |
| shared_key | | shared_key |
```
print(bob_key)
print(alice_key)
print("key length = %i" % len(alice_key))
```
## 4. Qiskit Example: *With* Interception
Let’s now see how Alice and Bob can tell if Eve has been trying to listen in on their quantum message. We repeat the same steps as without interference, but before Bob receives his qubits, Eve will try and extract some information from them. Let's set a different seed so we get a specific set of reproducible 'random' results:
```
np.random.seed(seed=3)
```
### 4.1 Step 1:
Alice generates her set of random bits:
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
print(alice_bits)
```
### 4.2 Step 2:
Alice encodes these in the $Z$ and $X$-bases at random, and sends these to Bob through Eve's quantum channel:
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
## Step 2
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
print(alice_bases)
```
In this case, the first qubit in Alice's message is in the state $|+\rangle$:
```
message[0].draw()
```
### Interception!
Oh no! Eve intercepts the message as it passes through her channel. She tries to measure the qubits in a random selection of bases, in the same way Bob will later.
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
## Step 2
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Interception!!
eve_bases = randint(2, size=n)
intercepted_message = measure_message(message, eve_bases)
print(intercepted_message)
```
We can see the case of qubit 0 below; Eve's random choice of basis is not the same as Alice's, and this will change the qubit state from $|+\rangle$ to a random state in the $Z$-basis, with a 50% probability of $|0\rangle$ or $|1\rangle$:
```
message[0].draw()
```
### 4.3 Step 3:
Eve then passes on the qubits to Bob, who measures them at random. In this case, Bob chose (by chance) to measure in the same basis Alice prepared the qubit in. Without interception, Bob would be guaranteed to measure `0`, but because Eve tried to read the message he now has a 50% chance of measuring `1` instead.
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
## Step 2
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Interception!!
eve_bases = randint(2, size=n)
intercepted_message = measure_message(message, eve_bases)
## Step 3
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
message[0].draw()
```
### 4.4 Step 4:
Bob and Alice reveal their basis choices, and discard the useless bits:
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
## Step 2
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Interception!!
eve_bases = randint(2, size=n)
intercepted_message = measure_message(message, eve_bases)
## Step 3
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
## Step 4
bob_key = remove_garbage(alice_bases, bob_bases, bob_results)
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
```
### 4.5 Step 5:
Bob and Alice compare the same random selection of their keys to see if the qubits were intercepted:
```
np.random.seed(seed=3)
## Step 1
alice_bits = randint(2, size=n)
## Step 2
alice_bases = randint(2, size=n)
message = encode_message(alice_bits, alice_bases)
## Interception!!
eve_bases = randint(2, size=n)
intercepted_message = measure_message(message, eve_bases)
## Step 3
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
## Step 4
bob_key = remove_garbage(alice_bases, bob_bases, bob_results)
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
## Step 5
sample_size = 15
bit_selection = randint(n, size=sample_size)
bob_sample = sample_bits(bob_key, bit_selection)
print(" bob_sample = " + str(bob_sample))
alice_sample = sample_bits(alice_key, bit_selection)
print("alice_sample = "+ str(alice_sample))
bob_sample == alice_sample
```
Oh no! Bob's key and Alice's key do not match. We know this is because Eve tried to read the message between steps 2 and 3 and changed the qubits' states. For all Alice and Bob know, this could be due to noise in the channel, but either way they must throw away all their results and try again. Eve's interception attempt has failed.
## 5. Risk Analysis
For this type of interception, in which Eve measures all the qubits, there is a small chance that Bob and Alice's samples could match, and Alice sends her vulnerable message through Eve's channel. Let's calculate that chance and see how risky quantum key distribution is.
- For Alice and Bob to use a qubit's result, they must both have chosen the same basis. If Eve chooses this basis too, she will successfully intercept this bit without introducing any error. There is a 50% chance of this happening.
- If Eve chooses the *wrong* basis, i.e. a different basis to Alice and Bob, there is still a 50% chance Bob will measure the value Alice was trying to send. In this case, the interception also goes undetected.
- But if Eve chooses the *wrong* basis, i.e. a different basis to Alice and Bob, there is a 50% chance Bob will not measure the value Alice was trying to send, and this *will* introduce an error into their keys.

If Alice and Bob compare 1 bit from their keys, the probability that the bits will match is $0.75$, and if so they will not notice Eve's interception. If they compare 2 bits, there is a $0.75^2 = 0.5625$ chance of the interception going unnoticed. We can see that the probability of Eve going undetected can be calculated from the number of bits ($x$) Alice and Bob choose to compare:
$$ P(\text{undetected}) = 0.75^x $$
If we decide to compare 15 bits as we did above, there is a 1.3% chance Eve will be undetected. If this is too risky for us, we could compare 50 bits instead, and have a 0.00006% chance of being spied upon unknowingly.
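As a quick check of these numbers, here is a small side calculation (not part of the protocol code above):
```
# Probability that Eve goes undetected when Alice and Bob compare x bits
for x in [1, 2, 15, 50]:
    print("compare %2d bits -> P(undetected) = %.6f%%" % (x, 100 * 0.75**x))
```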
You can rerun the protocol by executing the cell below. Try changing `sample_size` to something low and see how easy it is for Eve to intercept Alice and Bob's keys.
```
n = 100
# Step 1
alice_bits = randint(2, size=n)
alice_bases = randint(2, size=n)
# Step 2
message = encode_message(alice_bits, alice_bases)
# Interception!
eve_bases = randint(2, size=n)
intercepted_message = measure_message(message, eve_bases)
# Step 3
bob_bases = randint(2, size=n)
bob_results = measure_message(message, bob_bases)
# Step 4
bob_key = remove_garbage(alice_bases, bob_bases, bob_results)
alice_key = remove_garbage(alice_bases, bob_bases, alice_bits)
# Step 5
sample_size = 15 # Change this to something lower and see if
# Eve can intercept the message without Alice
# and Bob finding out
bit_selection = randint(n, size=sample_size)
bob_sample = sample_bits(bob_key, bit_selection)
alice_sample = sample_bits(alice_key, bit_selection)
if bob_sample != alice_sample:
print("Eve's interference was detected.")
else:
print("Eve went undetected!")
import qiskit
qiskit.__qiskit_version__
```
# Fix reads
```
import os
# Fix warning about locale unset
os.environ['LANG'] = 'en_US.UTF-8'
!pushd output/reads; prename 's/initial_//' *.fq.gz; popd
```
Jackalope produces reads with non-standard identifiers where pairs of reads don't have matching identifiers. For example:
* Pair 1: `@SH08-001-NC_011083-3048632-R/1`
* Pair 2: `@SH08-001-NC_011083-3048396-F/2`
In order to run snippy, these paired identifiers need to match (except for the `/1` and `/2` suffix).
So, I have to replace them all with identifiers that are unique but that match within each pair of files. I do this by replacing what I believe is the position field with the read number (as it appears in the file). So the above identifiers become:
* Pair 1: `@SH08-001-NC_011083-1/1`
* Pair 2: `@SH08-001-NC_011083-1/2`
```
import glob
import os
files = [os.path.basename(f) for f in glob.glob('output/reads/*.fq.gz')]
!parallel -j 24 -I% 'gzip -d --stdout output/reads/% | perl ../scripts/replace-fastq-header.pl | gzip > output/%' \
::: {' '.join(files)}
```
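The actual rewriting is done by `../scripts/replace-fastq-header.pl`, which isn't shown here. As a rough illustration of the idea, here is a hypothetical Python sketch (assuming plain 4-line FASTQ records and identifiers of the form `@SAMPLE-CHROM-POSITION-F/1`; it is not the script used above):
```
import gzip

def renumber_fastq(in_path, out_path):
    # Replace the position field in each header with the read's ordinal number,
    # so that mates in the R1 and R2 files end up with matching identifiers.
    with gzip.open(in_path, 'rt') as fin, gzip.open(out_path, 'wt') as fout:
        for line_number, line in enumerate(fin):
            if line_number % 4 == 0:  # header line of each 4-line FASTQ record
                prefix = line.rsplit('-', 2)[0]
                mate = line.rstrip().rsplit('/', 1)[1]
                line = "%s-%d/%s\n" % (prefix, line_number // 4 + 1, mate)
            fout.write(line)
```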
# Create input file for snippy
```
import os
import glob
reference_file = 'input/genome.fasta.gz'
# snippy only runs with uncompressed reference
!gunzip -f -k {reference_file}
reference_file_abs = os.path.abspath('input/genome.fasta')
snippy_out = os.path.abspath('phylogeny')
if not os.path.exists(snippy_out):
os.mkdir(snippy_out)
with open(f'{snippy_out}/snippy.fofn', 'w') as snippy_fofn:
directory = 'output'
for file in glob.glob(f'{directory}/*_R1.fq.gz'):
sample = os.path.basename(file).rsplit('_R1.fq.gz')[0]
files = [f'{directory}/{sample}_R1.fq.gz', f'{directory}/{sample}_R2.fq.gz']
files = [os.path.abspath(f) for f in files]
values = [sample]
values.extend(files)
snippy_fofn.write('\t'.join(values)+'\n')
!head -n 1 {snippy_out}/snippy.fofn
```
# Run snippy
```
!conda run --name snippy snippy-multi {snippy_out}/snippy.fofn \
--reference {reference_file_abs} --cpus 6 > {snippy_out}/snippy-commands-all.sh
!head -n-2 {snippy_out}/snippy-commands-all.sh > {snippy_out}/snippy-commands-variant.sh
!tail -n 2 {snippy_out}/snippy-commands-all.sh > {snippy_out}/snippy-commands-core.sh
!tail -n 2 {snippy_out}/snippy-commands-variant.sh
!echo '****'
!tail {snippy_out}/snippy-commands-core.sh
# Run variant calling in parallel
!(pushd {snippy_out} && conda run --name snippy \
parallel -j 12 -a {snippy_out}/snippy-commands-variant.sh && popd) > {snippy_out}/snippy-variant.log 2>&1
# Run core in serial
!(pushd {snippy_out} && conda run --name snippy \
bash {snippy_out}/snippy-commands-core.sh && popd) > {snippy_out}/snippy-core.log 2>&1
!column -s$'\t' -t phylogeny/core.txt
```
# Tree
```
!iqtree -redo -s phylogeny/core.aln -T 24 | tail -n 30
!sed -i.bak 's/Reference/reference/' phylogeny/core.aln.treefile
```
# Assignment #06: small numpy exercises for doing Big Science
## Exercise #06-01: indexing
Given a 2D numpy array defined as:
```
import numpy as np
x = np.array([[1, 2, 3],
[4, 5, 6]])
```
The following indexing operations all select the same values out of the array:
- ``x[:, 1]``
- ``x[slice(0, 2, 1), 1]``
- ``x[(slice(0, 2, 1), 1)]``
- ``x[slice(0, 2, 1), slice(1, 2, 1)]``
- ``x[..., 1]``
- ``x[::1, 1]``
- ``x[[0, 1], 1]``
- ``x[:, -2]``
- ``x[:, 1:2]``
- ``x[:, [1]]``
This can be checked with the following test:
```
from numpy.testing import assert_equal
ref = 7
assert_equal(ref, x[:, 1].sum())
assert_equal(ref, x[..., 1].sum())
assert_equal(ref, x[::1, 1].sum())
assert_equal(ref, x[slice(0, 2, 1), 1].sum())
assert_equal(ref, x[(slice(0, 2, 1), 1)].sum())
assert_equal(ref, x[slice(0, 2, 1), slice(1, 2, 1)].sum())
assert_equal(ref, x[[0, 1], 1].sum())
assert_equal(ref, x[:, -2].sum())
assert_equal(ref, x[:, 1:2].sum())
assert_equal(ref, x[:, [1]].sum())
```
**Questions:**
- **What is the ``...`` syntax doing? Again, it is the literal equivalent of an actual python object: what is it?**
- **Some of these indexing operations are truly equivalent to the "obvious" one, ``x[:, 1]``. List them.**
- **Classify these operations (i) in basic and advanced operations, and (ii) by the shape of their output. Explain.**
- **I'd like my array ``a = x[:, 1:2]`` to have a shape of (2, ) like most of the other operations listed above. What can I do to reshape it?**
## Exercise #06-02: the difference
Consider the following example:
```
a = np.array([1, 2, 3])
b = a
c = a
b = a - 10
c -= 100
```
**What will be the values printed by ``print(a, b, c)`` after this code snippet? Explain.**
## Exercise #06-03: Greenwich
[ERA-Interim reanalysis](https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim) provides global atmospheric fields from 1979 to today. Someone prepared a grid of average temperature available here:
```
from urllib.request import Request, urlopen
from io import BytesIO
import json
# Parse the given url
url = 'https://github.com/fmaussion/scientific_programming/raw/master/data/monthly_temp.npz'
req = urlopen(Request(url)).read()
with np.load(BytesIO(req)) as data:
temp = data['temp']
lon = data['lon']
lat = data['lat']
```
However, the data is not well processed! The longitudes range from 0 to 360°, thus cutting the UK and Africa in half! Reorganize the data and the corresponding coordinates to obtain a plot similar to this one:
<img src="../img/18_temp_pic.png" align='left'>
*Back to the [table of contents](00-Introduction.ipynb#ctoc)*
# Loops and Control Structures
We will study these two topics together, because they are so commonly found in association with one another. Let's say you want to go through your list of students, and only print the names of students older than 25. How?
We will now look at "iteration" over both List and Hash (dictionary) data structures.
## Iteration - "looping"
"looping" (iteration) means to do the same thing many times. Imagine this code:
mylist = [123, 334, 223, 197, 901, 344]
print(mylist[0])
print(mylist[1])
print(mylist[2])
print(mylist[3])
print(mylist[4])
print(mylist[5])
Why is this bad? (discuss... there are many reasons!)
This is why we need Loops - they "abstract" the problem of doing the same thing, on a different value, some arbitrary number of times.
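For comparison, the same six print statements can be written as one short loop (using the "for" syntax introduced just below):
```
mylist = [123, 334, 223, 197, 901, 344]
for value in mylist:   # the body runs once for every value in the list
    print(value)
```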
### while, for
The two basic kinds of iterations are "while" and "for"
* "while" means "while a certain condition is true"
* "for" means "for a certain number of times"
They look like this:
while CONDITION:
    do something here
for ITERATING:
    do something here
Here is an example of while and for loops:
```
myage = 0
while myage < 18: # note the ":" colon character!
print("I am still young, at ", myage)
myage = myage + 1
#myage+=1
print("Now that I am ", myage, " I am old!")
myage = 10
for myage in range(1,18): # note the ":", and the use of "in" to assign the next value to the variable
print("I am ", myage, " years old")
print("at the end I am ", myage, " years old")
```
## if/else
Sometimes, you want to do something if a condition is true, but do something else if the condition is false. For that, you can use the "else" statement. It looks like this:
```
myage = 0
print("even number modulo ", 2 % 2)
print("odd number modulo ", 21 % 2) # 10 * 2 = 20 remainder 1
print("odd number modulo ", 35 % 3) # 10 * 3 = 33 remainder 2
for myage in range(1,18): # note the ":", and the use of "in" to assign the next value to the variable
if myage % 2 == 0: # % is the modulo operator - gives you the remainder when dividing myage by 2
# note that the comparison operator is "==". IT IS NOT "=" (= is used to assign a value!)
# see end of Lesson 4 to review the comparison and mathematical operators!
print("I am", myage, "my age is an even number")
else:
print("I am", myage, "my age is an odd number")
```
### More complex conditionals - elif
if this, else if (elif) that, else the other thing.
you can use as many elif's as you want!
```
myage = 0
for myage in range(1,18): # note the ":", and the use of "in" to assign the next value to the variable
    if (myage % 2 == 0) and (myage % 4 == 0):
print("I am", myage, "my age is divisible by BOTH 2 (logical AND!) 4")
elif myage % 4 == 0: # note the order of my "if" statements... what happens if I switch the %2 and the %4? Try!
print("I am", myage, "my age is divisible by 4")
elif myage % 2 == 0:
print("I am", myage, "my age is divisible by 2")
else:
print("I am", myage, "my age is an odd number")
#NOTE: this is a surprisingly common interview question for a data scientist!!!!
```
# DANGER!!!!!!!!
Note that loops can be dangerous!! For example, in a "while" loop, if the condition is never met, then the loop will never end! This is an "infinite loop", and they can be very very very bad, if your statements are, for example, writing to a file, or interacting with a database!
for example (DO NOT TRY THIS!!!!!!!!)
age = 0
while age != 21: # != "not equal to" condition will never be met - our age is always an even number
print(age)
age += 2 # an abbreviation for "age = age + 2"
You should always be careful that the condition will eventually be met; if you cannot guarantee this, there is a way to add a "sanity check" to your code so that you break the loop under a certain condition. The command is "break", and it is also used together with a conditional:
age = 0
while (age != 21): # condition will never be met, because our age is always an even number
print(age)
age += 2 # an abbreviation for "age = age + 2"
if age > 100: # set a "sanity check" can't be more than 100!
break
```
age = 0
while (age != 21): # this condition will never be met, because our age is always an even number!!!!
print(age)
age += 2 # an abbreviation for "age = age + 2"
if age > 30: # once you are 30, your life has ended... break the loop
break
```
Note that the use of "break" can fool you into thinking that everything was OK! The loop ends, and the program continues to run.... so it isn't PERFECT, but it can at least help minimize damage!
A better way to do this is to "raise an exception" - this allows you to send a message explaining what went wrong. In this introductory section, we won't discuss the topic of Exceptions beyond the fact that they exist, and are used as follows:
```
age = 0
while (age != 21): # this condition will never be met, because our age is always an even number!!!!
print(age)
age += 2 # an abbreviation for "age = age + 2"
if age > 30: # once you are 30, your life has ended... break the loop
raise Exception("the age became greater than 30, and this should not happen")
break
```
# "Safer" ways to iterate/loop
There are some safe ways to iterate:
1. using "in" on a list is safer
2. using "in" on a list index is safer
for example:
```
students = ["Mark", "Jonas", "Michele", "Alberto"]
for name in students:
print("student's name is ", name)
# you can determine the length of an array using the "len" function:
print()
print( len(students))
print()
for index in range(len(students)):
print("student #", index, "is named", students[index])
```
You can do similar things with Dictionaries. Often, you will want to separate your data based on the "keys" of the Dictionary. To do this, use the "keys" method on your dictionary. For example:
students = {"Mark": 50, "John": 45}
print( students.keys() )
```
students = {"Mark": 50, "John": 45}
print(students.keys())
print(students.values()) # this also works if you want to extract all of the values as a list - uncommon...
```
Note that the output is telling us that the "keys" method returns an object of type "dict_keys"... I will simply tell you that dict_keys can be used as an iterator (exactly like "range" is!). So you can say:
```
students = {"Mark": 50, "John": 45}
for student in students.keys():
print("student named", student, "is", students[student], "years old")
```
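If you want both the key and its value at the same time, dictionaries also have an "items" method that gives you both in one step; this is a common alternative to looking each value up by key, as done above:
```
students = {"Mark": 50, "John": 45}
for student, age in students.items():
    print("student named", student, "is", age, "years old")
```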
# Text Convolution
## Basic Text Convolution
- For more information, refer to:
- [Kim 2014](http://emnlp2014.org/papers/pdf/EMNLP2014181.pdf)
- [Zhang et al 2015](https://papers.nips.cc/paper/5782-character-level-convolutional-networks-for-text-classification.pdf)
- Sentence classification using convolutions (Kim 2014)

- Multiple convolution kernels (a small shape sketch follows below)

```
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.sequence import pad_sequences
num_features = 3000
sequence_length = 300
embedding_dimension = 100
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(num_words=num_features)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
x_train = pad_sequences(x_train, maxlen=sequence_length)
x_test = pad_sequences(x_test, maxlen=sequence_length)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
```
## Building a Basic Sentence Classifier
```
def imdb_cnn():
model = keras.Sequential([
layers.Embedding(input_dim=num_features, output_dim=embedding_dimension,input_length=sequence_length),
layers.Conv1D(filters=50, kernel_size=5, strides=1, padding='valid'),
layers.MaxPool1D(2, padding='valid'),
        layers.Flatten(), # Flatten the input; this does not affect the batch size
layers.Dense(10, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=['accuracy'])
return model
model = imdb_cnn()
model.summary()
%%time
history = model.fit(x_train, y_train, batch_size=64, epochs=5, validation_split=0.1)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
```
## Multi-Kernel Convolutional Network
```
filter_sizes=[3,4,5]
def convolution():
inn = layers.Input(shape=(sequence_length, embedding_dimension, 1))
cnns = []
for size in filter_sizes:
conv = layers.Conv2D(filters=64, kernel_size=(size, embedding_dimension),
strides=1, padding='valid', activation='relu')(inn)
pool = layers.MaxPool2D(pool_size=(sequence_length-size+1, 1), padding='valid')(conv)
cnns.append(pool)
outt = layers.concatenate(cnns)
model = keras.Model(inputs=inn, outputs=outt)
return model
def cnn_mulfilter():
model = keras.Sequential([
layers.Embedding(input_dim=num_features, output_dim=embedding_dimension,
input_length=sequence_length),
layers.Reshape((sequence_length, embedding_dimension, 1)),
convolution(),
layers.Flatten(),
layers.Dense(10, activation='relu'),
layers.Dropout(0.2),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=keras.optimizers.Adam(),
loss=keras.losses.BinaryCrossentropy(),
metrics=['accuracy'])
return model
model = cnn_mulfilter()
model.summary()
%%time
history = model.fit(x_train, y_train, batch_size=64, epochs=5, validation_split=0.1)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['training', 'validation'], loc='upper left')
plt.show()
```
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from mpl_toolkits.axes_grid1 import make_axes_locatable
import os
import warnings
warnings.filterwarnings("ignore")
import matplotlib
font = {'size' : 14}
matplotlib.rc('font', **font)
neg,avs,avm,avf,ri,rd = np.zeros((8,8)),np.zeros((8,8)),np.zeros((8,8)),np.zeros((8,8)),np.zeros((8,8)),np.zeros((8,8))
for m in range(8):
for n in range(8):
dnm = np.load('data/k3_v/m' + str(m+4) + '_n' + str(n+4) + '.npy')
diff = dnm[:,0] - dnm[:,2]
neg[m,n] = np.sum(diff<0)
avs[m,n] = np.mean(dnm[:,0])
avm[m,n] = np.mean(dnm[:,1])
avf[m,n] = np.mean(dnm[:,2])
ri[m,n] = (avs[m,n] - avf[m,n]) / avs[m,n]
rd[m,n] = (avm[m,n] - avs[m,n]) / avs[m,n]
np.sum(neg)
fig,ax = plt.subplots(1,1,figsize=(4,4))
im = ax.imshow(rd, cmap='Blues', vmin=np.min(rd), vmax=np.max(rd), origin='lower');
xtk = 3+np.arange(0,8)
ax.set_xticklabels(xtk)
ax.set_yticklabels(xtk);
div = make_axes_locatable(ax)
cax = div.append_axes("right", size="3%", pad=0.1)
plt.colorbar(im, cax=cax);
np.mean(rd)
fig, ax = plt.subplots(1,4,figsize=(16,4),sharey=True)
im0 = ax[0].imshow(neg, cmap='Blues', vmin=np.min(neg), vmax=np.max(neg), origin='lower')
ax[0].set_title('Number of negative optimizations')
im1 = ax[1].imshow(avs, cmap='Blues', vmin=np.min(avs), vmax=np.max(avs), origin='lower')
ax[1].set_title('Average initial distance')
im2 = ax[2].imshow(avf, cmap='Blues', vmin=np.min(avf), vmax=np.max(avf), origin='lower')
ax[2].set_title('Average final distance')
im3 = ax[3].imshow(ri, cmap='Blues', vmin=np.min(ri), vmax=np.max(ri), origin='lower')
ax[3].set_title('Relative improvement')
ax[0].set_ylabel('M: number of observables');
iml = [im0,im1,im2,im3]
xtk = 4+np.arange(0,8)
for i in range(4):
a = ax[i]
a.set_xticks(np.arange(0,8))
a.set_yticks(np.arange(0,8))
a.set_xticklabels(xtk)
a.set_yticklabels(xtk);
a.set_xlabel('N: number of experiments')
div = make_axes_locatable(a)
cax = div.append_axes("right", size="3%", pad=0.1)
plt.colorbar(iml[i], cax=cax);
fig.tight_layout()
fig,ax = plt.subplots(1,1,figsize=(9,3))
ax.plot(1+np.arange(0,8), ri[:8,0], label='N=K+1, mean = %.2f' % np.mean(ri[:,0]), color = 'r')
ax.plot(1+np.arange(0,8), ri[:8,1], label='N=K+2, mean = %.2f' % np.mean(ri[:,1]), color = 'b')
ax.plot(1+np.arange(0,8), ri[:8,2], label='N=K+3, mean = %.2f' % np.mean(ri[:,2]), color = 'g')
ax.legend(loc=(1.02,0))
ax.set_ylim(0,0.8);
ax.set_ylabel('Relative improvement');
ax.set_xlabel('M');
ax.set_title('K=3')
fig.tight_layout()
fig.savefig('figures/k3-lines.pdf')
xtk = 1+np.arange(0,8)
fig,ax = plt.subplots(1,1,figsize=(5,3))
ax.plot(xtk, np.mean(ri[0:8,0:8], axis=1), color = 'k', lw=7)
'''for i in range(8):
ax.plot(1+np.arange(0,8), ri[:,i], label='N=K+%d' % (i+1))'''
#ax.legend(loc=(1.02,0))
ax.set_ylim(0,0.7);
ax.set_ylabel('RI averaged over N');
ax.set_xlabel('M');
ax.set_title('K=3')
ax.set_xticks(xtk)
ax.grid(True)
fig.tight_layout()
fig.savefig('figures/k3-average.pdf')
np.mean(ri, axis=1)
np.save('data/ri/ri3.npy', ri)
print(ri)
```
```
import os
import cv2
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import SGD
from torchvision import models
import scipy.ndimage as nd
import numpy as np
from torchvision import transforms
from PIL import Image
IMG_PATH = './pytorch-cnn-visualizations/input_images/dd_tree.jpg'
pil_img = Image.open(IMG_PATH)
pil_img
img_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
x = img_transform(pil_img).unsqueeze_(0).cuda()
model = models.alexnet(pretrained=True).cuda()
class DeepDream:
def __init__(self, module, layer):
self.module = module
self.trace = (None, None, None)
self.layer = layer
def register_hooks(self):
def hook(module, input, output):
self.trace = (module, input, output)
self.layer.register_forward_hook(hook)
    def optim(self, image, steps):
        # the optimizer needs an iterable of parameters, and the image must require gradients
        image = image.clone().requires_grad_(True)
        self.optimizer = SGD([image], lr=4, weight_decay=1e-4)
        for _ in range(steps):
            self.optimizer.zero_grad()
            x = image
            # walk through the feature layers of the wrapped model (model.features is a Sequential)
            for index, layer in enumerate(self.module.features):
                x = layer(x)
                if layer == self.layer:
                    break
            output = self.trace[2]
            # gradient ascent: maximise the mean activation of the hooked layer
            loss = -torch.mean(output)
            loss.backward()
            self.optimizer.step()
        return image
    def dream(self, image, n, scale_factor):
        # downscale the image and optimise at that scale (still a work-in-progress sketch)
        if n <= 0:
            return image
        scale_factor *= scale_factor
        image = torch.nn.functional.interpolate(image, scale_factor=scale_factor)
        return self.optim(image, steps=10)  # number of ascent steps chosen arbitrarily
    def __call__(self, image, octave_n=6, octave_scale=1.4):
        self.register_hooks()
        # build a list of progressively smaller versions of the image (octaves)
        octaves = [image.cpu().numpy()]
        for i in range(octave_n - 1):
            octaves.append(nd.zoom(octaves[-1], (1, 1, 1.0 / octave_scale, 1.0 / octave_scale), order=1))
        print(octaves)
dd = DeepDream(model, model.features[3])
dd(x)
def plot_tensor(tensor):
tensor = torch.nn.functional.interpolate(tensor, scale_factor=0.7)
img = transforms.ToPILImage()(tensor.squeeze().cpu())
return img
plot_tensor(x)
np_img = cv2.imread(IMG_PATH)
octaves = [np.expand_dims(np_img,0)]
octave_n = 6
octave_scale = 1.4
for i in range(octave_n - 1):
octaves.append(nd.zoom(octaves[-1], (1, 1, 1.0 / octave_scale, 1.0 / octave_scale),order=1))
def gaussian_filter(*args, **kwargs):
    # build a fixed Gaussian kernel and wrap it in a (non-trainable) Conv2d layer;
    # the channel counts and kernel size below are placeholders and must match g_weights
    g_weights = nd.filters.gaussian_filter(*args, **kwargs)
    conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=g_weights.shape[-1], bias=False)
    conv.weight.data.copy_(torch.from_numpy(g_weights))
    return conv
blurred = nd.filters.gaussian_filter(x.cpu().numpy(), sigma=2)
blurred = torch.from_numpy(blurred)
plot_tensor(blurred)
for octave in octaves:
print(octave.shape)
tensor = transforms.functional.to_pil_image(octave.squeeze())
tensor = transforms.functional.to_tensor(tensor)
img = plot_tensor(tensor)
img.show()
a = [1,2]
for i, j in zip(a[1::-1], a[0::-1]):
print(i,j)
```
```
import feather
import numpy as np
import pandas as pd
from sklearn import metrics
from sklearn import model_selection
from sklearn import preprocessing
import tensorflow as tf
```
### Load NYC Taxi fare prepped data
```
train_df = feather.read_dataframe("../../datasets/kaggle/new-york-city-taxi-fare-prediction/train.feather")
train_df["hour_period"]=train_df["hour"] // 4
train_df = pd.get_dummies(train_df, prefix=["year","hour_period"], columns=["year","hour_period"])
print(train_df.shape)
print(train_df.columns)
cols = [
"passenger_count",
"distance_miles",
"distance_to_center",
"is_to_from_JFK_new",
"year_2009",
"year_2010",
"year_2011",
"year_2012",
"year_2013",
"year_2014",
"year_2015",
"hour_period_0",
"hour_period_1",
"hour_period_2",
"hour_period_3",
"hour_period_4",
"hour_period_5",
]
x = train_df[cols].values
y = train_df[['fare_amount']].values
x_train, x_val, y_train, y_val = model_selection.train_test_split(
x, y, test_size=0.1, random_state=42)
train_df = None
x = None
y = None
scaler = preprocessing.StandardScaler()
x_train_norm = scaler.fit_transform(x_train)
x_val_norm = scaler.transform(x_val)
x_train = None
x_val = None
x_train_norm.shape
feature_columns = [
tf.feature_column.numeric_column('x', shape=np.array(x_train_norm).shape[1:])]
from datetime import datetime
print(datetime.now())
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': x_train_norm}, y=y_train, batch_size=10000, num_epochs=25, shuffle=True)
regressor = tf.estimator.DNNRegressor(
feature_columns=feature_columns, hidden_units=[50, 25, 25])
regressor.train(input_fn=train_input_fn)
print(datetime.now())
val_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': x_val_norm}, y=y_val, num_epochs=1, shuffle=False)
scores = regressor.evaluate(input_fn=val_input_fn)
print('MSE (tensorflow): {0:f}'.format(scores['average_loss']))
predictions = regressor.predict(input_fn=val_input_fn)
y_predicted = np.array(list(p['predictions'] for p in predictions))
y_predicted = y_predicted.reshape(np.array(y_val).shape)
score_sklearn = metrics.mean_squared_error(y_predicted, y_val)
print('MSE (sklearn): {0:f}'.format(score_sklearn))
y_predicted.shape
y_predicted[0:2]
test_df = feather.read_dataframe("../../datasets/kaggle/new-york-city-taxi-fare-prediction/test.feather")
test_df["hour_period"]=test_df["hour"] // 4
test_df = pd.get_dummies(test_df, prefix=["year","hour_period"], columns=["year","hour_period"])
cols = [
"passenger_count",
"distance_miles",
"distance_to_center",
"is_to_from_JFK_new",
"year_2009",
"year_2010",
"year_2011",
"year_2012",
"year_2013",
"year_2014",
"year_2015",
"hour_period_0",
"hour_period_1",
"hour_period_2",
"hour_period_3",
"hour_period_4",
"hour_period_5",
]
x_test = test_df[cols].values
x_test_norm = scaler.transform(x_test)
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={'x': x_test_norm}, num_epochs=1, shuffle=False)
test_predictions = regressor.predict(input_fn=test_input_fn)
y_predicted = np.array(list(p['predictions'] for p in test_predictions))
y_predicted = y_predicted.reshape((9914,1))
#y_predicted = np.array(list(test_predictions))
#y_predicted = y_predicted.reshape((9914,1))
y_predicted.shape
preds = [p[0] for p in y_predicted]
preds
# Write the predictions to a CSV file which we can submit to the competition.
submission = pd.DataFrame(
{'key': test_df.key, 'fare_amount': preds},
columns = ['key', 'fare_amount'])
submission.to_csv('../../datasets/kaggle/new-york-city-taxi-fare-prediction/submission.csv', index = False)
submission.describe()
```
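One optional post-processing step, not part of the original notebook: a regression model can predict negative fares, so clipping the predictions at a plausible floor before submitting may avoid obviously invalid values. The $2.50 floor and the submission_clipped.csv filename below are assumptions, not taken from the original code.
```
# hypothetical clipping of the predictions; the 2.5 floor is an assumption (NYC base fare)
submission['fare_amount'] = submission['fare_amount'].clip(lower=2.5)
submission.to_csv('../../datasets/kaggle/new-york-city-taxi-fare-prediction/submission_clipped.csv', index=False)
submission.describe()
```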
# Effects of Chaining rNMF and sICA
This notebook reproduces some results of the manuscript http://dx.doi.org/10.1016/j.neuroimage.2014.04.041. Make sure the regNMF module (available at https://github.com/jansoe/FUImaging) is on your PYTHONPATH.
```
import sys
import os
import pickle
import matplotlib.pyplot as plt
import numpy as np
from collections import defaultdict
from scipy.spatial.distance import pdist
from scipy.stats import gaussian_kde
pythonpath_for_regnmf = os.path.realpath(os.path.join(os.path.pardir, os.path.pardir))
sys.path.append(pythonpath_for_regnmf)
from regnmf import ImageAnalysisComponents as ia
from regnmf import datamaker
from regnmf.regularizedHALS import convex_cone
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
%matplotlib inline
```
### Parameters for the Surrogate Data
```
param = {'act_time': [0.01, 0.1, 0.3, 0.8, 1.0, 1.0],
'cov': 0.3,
'latents': 40,
'mean': 0.2,
'no_samples': 50,
'noisevar': 0.2,
'shape': (50, 50),
'width':0.1,
'var': 0.08}
```
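As a quick look at what these settings produce (a sketch, not part of the original notebook; it only relies on the observed attribute that the analysis loop below also uses):
```
# demo only: generate one surrogate dataset and inspect its observation matrix
demo_data = datamaker.Dataset(param)
print(demo_data.observed.shape)  # the array passed to ia.TimeSeries(...) in the loop below
```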
### Parameters for the Matrix Factorizations
```
anal_param = {'sparse_param': 0.5,
'factors': 80,
'smooth_param': 2,
'init':'convex',
'sparse_fct':'global_sparse',
'verbose':0
}
```
### Helper Functions
```
def violin_plot(ax, data, color='b'):
'''
create violin plots on an axis
'''
w = 0.4
for p, d in enumerate(data):
k = gaussian_kde(d) #calculates the kernel density
m = k.dataset.min() #lower bound of violin
M = k.dataset.max() #upper bound of violin
x = np.arange(m,M,(M-m)/100.) # support for violin
v = k.evaluate(x) #violin profile (density curve)
scale = w/v.max()
v = v*scale #scaling the violin to the available space
ax.fill_betweenx(x,p,v+p, facecolor=color, edgecolor = color, alpha=1)
ax.fill_betweenx(x,p,-v+p, facecolor=color, edgecolor = color, alpha=1)
#median
perc = np.percentile(d, [25,50,75])
perc_width = k.evaluate(perc)*scale
l1, = ax.plot([p-perc_width[1],p+perc_width[1]],[perc[1], perc[1]], 'k', lw=0.5)
l2, = ax.plot([p-perc_width[0],p+perc_width[0]],[perc[0], perc[0]], '0.25', lw=0.5)
ax.plot([p-perc_width[2],p+perc_width[2]],[perc[2], perc[2]], '0.25', lw=0.5)
ax.legend([l1, l2], ['median', 'quartiles'], prop={'size':fontsize}, numpoints=1,
loc = 'lower right', labelspacing=0.1, handletextpad=0.5, bbox_to_anchor = (1, 0.9),
handlelength=1, borderaxespad=-0.5, frameon=False)
def cor(time1, time2, num_sources):
'''calculate crosscorrelation between sources and latents'''
return np.corrcoef(np.vstack((time1, time2)))[num_sources:, :num_sources]
```
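As a quick sanity check of the helper (not part of the original analysis), violin_plot can be exercised on random data. Note that it reads fontsize from the global namespace, which the original notebook only defines in the plotting cell further below:
```
fontsize = 10  # violin_plot's legend reads this global
demo_fig, demo_ax = plt.subplots(figsize=(4, 3))
violin_plot(demo_ax, [np.random.randn(200), np.random.randn(200) + 1], color='0.7')
demo_ax.set_xticks([0, 1])
demo_ax.set_xticklabels(['sample A', 'sample B'])
plt.show()
```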
## Perform chained matrix factorization
The applied factorizations are:
- plain rNMF and plain sICA
- rNMF on the sICA (i.e. PCA) reconstruction of the data, that is __A*X__
- rNMF initialized with rectified sICA components
- sICA on the pixel participations of rNMF
- sICA on the rNMF reconstruction of the data
- sICA on the convex-cone initialization components (ccinit_sica in the code below)
```
num_datasets = 5 #number of independent datasets
mse = defaultdict(list)
cor = defaultdict(list)  # note: this rebinds 'cor' and shadows the helper function defined above
for dummy in range(num_datasets):
compare = {}
# create data
tempdata = datamaker.Dataset(param)
# plain NMF
nnma = ia.NNMF(maxcount=50, num_components=anal_param['factors'])
anal_param.update({'init':'convex'})
nnma.param.update(anal_param)
compare['nmf'] = nnma(ia.TimeSeries(tempdata.observed, shape=param['shape']))
# plain sICA
sica = ia.sICA(num_components=anal_param['factors'])
compare['sica'] = sica(ia.TimeSeries(tempdata.observed, shape=param['shape']))
# NMF on sICA reduced data
reduced_data = np.dot(compare['sica']._series, compare['sica'].base._series)
compare['sicareduced_nmf'] = nnma(ia.TimeSeries(reduced_data, shape=param['shape']))
# sICA initialized NMF
nnma = ia.NNMF(maxcount=50, num_components=anal_param['factors'])
A = compare['sica']._series.copy()
X = compare['sica'].base._series.copy()
A[A<0]=0
X[X<0]=0
anal_param.update({'init':{'A':A, 'X':X}})
nnma.param.update(anal_param)
compare['sicainit_nmf'] = nnma(ia.TimeSeries(tempdata.observed, shape=param['shape']))
# NMF initialized sICA
compare['nmfinit_sica'] = compare['nmf'].copy()
sica = ia.sICA(num_components=anal_param['factors'])
out_temp = sica(compare['nmfinit_sica'].base.copy())
compare['nmfinit_sica'].base = out_temp.base
compare['nmfinit_sica']._series = np.dot(compare['nmfinit_sica']._series, out_temp._series)
# sICA on NMF reduced data
nmf_reduced = np.dot(compare['nmf']._series, compare['nmf'].base._series)
sica = ia.sICA(num_components=anal_param['factors'])
compare['nmfreduced_sica'] = sica(ia.TimeSeries(nmf_reduced, shape=param['shape']))
# sICA on convex cone
compare['ccinit_sica'] = compare['nmf'].copy()
init = convex_cone(tempdata.observed, anal_param['factors'])
out_temp = sica(ia.TimeSeries(np.array(init['base']), shape=param['shape']))
compare['ccinit_sica'].base = out_temp.base
compare['ccinit_sica']._series = np.dot(np.array(init['timecourses']).T, out_temp._series)
#collect performance measures
for k in compare:
cor[k] += list(tempdata.cor2source(compare[k])[1])
mse[k] += list(tempdata.mse2source(compare[k], local=0.05))
```
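Before plotting, a compact numeric summary can help (a small addition, not from the manuscript). The plots below use SR = 1 - MSE as the source-recovery score, so the median SR per variant is:
```
# median source recovery (SR = 1 - MSE) per factorization variant
for key in sorted(mse):
    sr = 1 - np.array(mse[key])
    print('{0:20s} median SR = {1:.3f}'.format(key, np.median(sr)))
```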
### Violin Plots of Source Recovery (SR)
```
fig = plt.figure(figsize=(15, 6))
fontsize = 10
ax = fig.add_axes([0.1,0.2,0.35,0.75])
keys = ['nmf', 'sicainit_nmf', 'sicareduced_nmf']
data = [1-np.array(mse[i]) for i in keys]
violin_plot(ax, data, '0.5')
ax.set_xticks(range(len(keys)))
ax.set_xticklabels(['NMF', 'sICA init\nNMF', 'sICA reconst.\nNMF'],
rotation='0', ha='center', size=fontsize)
ax.set_ylabel('SR', size=fontsize)
ax.set_ylim([0,0.9])
ax.set_yticks([0,0.4,0.8])
ax.yaxis.set_tick_params(labelsize=fontsize)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_tick_params(size=0)
for pos in ['right', 'bottom', 'top']:
ax.spines[pos].set_color('none')
ax = fig.add_axes([0.6,0.2,0.35,0.75])
keys = ['sica', 'nmfinit_sica', 'nmfreduced_sica']
data = [1-np.array(mse[i]) for i in keys]
violin_plot(ax, data, '0.5')
ax.set_xticks(range(len(keys)))
ax.set_xticklabels(['sICA', 'NMF init\nsICA', 'NMF reconst.\nsICA'],
rotation='0', ha='center', size=fontsize)
ax.set_ylabel('SR', size=fontsize)
ax.set_ylim([0,0.9])
ax.set_yticks([0,0.4,0.8])
ax.yaxis.set_tick_params(labelsize=fontsize)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_tick_params(size=0)
for pos in ['right', 'bottom', 'top']:
ax.spines[pos].set_color('none')
plt.show()
```
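If a file copy of the figure is needed (not part of the original notebook), the fig handle created above can be written out directly; the filename is arbitrary:
```
# optional: save the violin-plot figure to disk
fig.savefig('chaining_SR_violins.png', dpi=150, bbox_inches='tight')
```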