markdown | code | output | license | path | repo_name
---|---|---|---|---|---
3 Lowest HDI Countries | ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'TFR', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'TFR', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'TFR', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: TFR 1960-2018')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: TFR 1960-2018.png')
#TFR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'TFR', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'TFR', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'TFR', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('TFR')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: TFR 1960-2018')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: TFR 1960-2018.png')
#TFR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: FLPR 1990-2017')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: FLPR 1990-2017.png')
#FLPR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: FLPR 1990-2017')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: FLPR 1990-2017.png')
#FLPR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'HDI', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'HDI', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'HDI', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('HDI')
ax.legend(loc='lower right')
ax.set_title('3 Lowest HDI Countries: HDI 1990-2018')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: HDI 1990-2018.png')
#HDI of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='Year', y= 'HDI', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='Year', y= 'HDI', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='Year', y= 'HDI', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('HDI')
ax.legend(loc='lower right')
ax.set_title('3 Lowest HDI Countries: HDI 1990-2018')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: HDI 1990-2018.png')
#HDI of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='HDI', y= 'TFR', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: HDI vs TFR')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: HDI vs TFR.png')
#HDI vs TFR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='HDI', y= 'TFR', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='HDI', y= 'TFR', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='HDI', y= 'TFR', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('TFR')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: HDI vs TFR')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: HDI vs TFR.png')
#HDI vs TFR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: HDI vs FLPR')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: HDI vs FLPR.png')
#HDI vs FLPR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='HDI', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='best')
ax.set_title('3 Lowest HDI Countries: HDI vs FLPR')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: HDI vs FLPR.png')
#HDI vs FLPR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='scatter', c = 'navy', ax=ax, label = 'Chad')
ax.legend(loc='lower right')
ax.set_title('3 Lowest HDI Countries: TFR vs FLPR')
ax.figure.savefig('Scatter Plot 3 Lowest HDI Countries: TFR vs FLPR.png')
#TFR vs FLPR of 3 lowest HDI countries
ax = africa_low.loc[africa_low['Country']=='Niger'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'cornflowerblue', label = 'Niger')
africa_low.loc[africa_low['Country']=='Central African Republic'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'mediumblue', ax=ax, label = 'Central African Republic')
africa_low.loc[africa_low['Country']=='Chad'].plot(x='TFR', y= 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', kind='line', c = 'navy', ax=ax, label = 'Chad')
ax.set_ylabel('Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)')
ax.legend(loc='lower right')
ax.set_title('3 Lowest HDI Countries: TFR vs FLPR')
ax.figure.savefig('Line Plot 3 Lowest HDI Countries: TFR vs FLPR.png')
#TFR vs FLPR of 3 lowest HDI countries | _____no_output_____ | MIT | Python Analysis/Visualisations/Code/Visualisation_TFR_and_FLPR_of_Highest_and_Lowest_HDI_countries.ipynb | ChangHuaHua/QM2-Group-12 |
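The three per-country blocks above all repeat the same pandas pattern: the first .plot() call creates the axes and the later calls draw onto them via ax=. A small helper could remove most of the repetition; the following is a sketch only, assuming the same africa_low DataFrame, column names, and colours used above:
def plot_three_countries(df, x, y, kind, title, filename, ylabel=None, legend_loc='best',
                         countries=('Niger', 'Central African Republic', 'Chad'),
                         colors=('cornflowerblue', 'mediumblue', 'navy')):
    # Plot the three countries on shared axes, then label, title and save the figure
    ax = None
    for country, color in zip(countries, colors):
        subset = df.loc[df['Country'] == country]
        ax = subset.plot(x=x, y=y, kind=kind, c=color, label=country, ax=ax)
    if ylabel is not None:
        ax.set_ylabel(ylabel)
    ax.legend(loc=legend_loc)
    ax.set_title(title)
    ax.figure.savefig(filename)
    return ax
# e.g. the first scatter plot above becomes:
# plot_three_countries(africa_low, 'Year', 'TFR', 'scatter',
#                      '3 Lowest HDI Countries: TFR 1960-2018',
#                      'Scatter Plot 3 Lowest HDI Countries: TFR 1960-2018.png')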
Hyperparameter tuning with Amazon SageMaker and Deep Graph Library with PyTorch backend. _**Creating a hyperparameter tuning job for a DGL network.**_ Contents: 1. [Background](#Background) 2. [Setup](#Setup) 3. [Tune](#Train) 4. [Wrap-up](#Wrap-up) Background: This example notebook shows how to generate knowledge graph embeddings using the DMLC DGL API and the FB15k dataset. It uses Amazon SageMaker hyperparameter tuning to start multiple training jobs with different hyperparameter combinations, which helps you find the set with the best model performance. This is an important step in the machine learning process, as hyperparameter settings can have a large effect on model accuracy. In this example, you use the [Amazon SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) to create a hyperparameter tuning job for an Amazon SageMaker estimator. Setup: This notebook was created and tested on an ml.p3.2xlarge notebook instance. Prerequisites: * You can successfully run the kge_pytorch example (see kge_pytorch.ipynb). * You have an S3 bucket and prefix that you want to use for training and model data. This should be within the same Region as the notebook instance, training, and hosting. * You have the IAM role ARN used to give training and hosting access to your data. See the documentation for more details on creating these. If a role not associated with the current notebook instance, or more than one role, is required for training or hosting, replace sagemaker.get_execution_role() with the appropriate full IAM role ARN strings. | import sagemaker
from sagemaker import get_execution_role
from sagemaker.session import Session
# Setup session
sess = sagemaker.Session()
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sess.default_bucket()
# Location to put your custom code.
custom_code_upload_location = 'customcode'
# IAM execution role that gives Amazon SageMaker access to resources in your AWS account.
# You can use the Amazon SageMaker Python SDK to get the role from the notebook environment.
role = get_execution_role() | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
Now we'll import the Python libraries we'll need. | import boto3
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
Tune: Similar to running a single training job in Amazon SageMaker, you define the training estimator, passing in the code scripts, IAM role, (per-job) hardware configuration, and any hyperparameters you're not tuning. | from sagemaker.pytorch import PyTorch
ENTRY_POINT = 'train.py'
CODE_PATH = './'
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
params = {}
params['dataset'] = 'FB15k'
params['model'] = 'DistMult'
params['batch_size'] = 1024
params['neg_sample_size'] = 256
params['hidden_dim'] = 2000
params['max_step'] = 100000
params['batch_size_eval'] = 16
params['valid'] = True
params['test'] = True
params['neg_adversarial_sampling'] = True
estimator = PyTorch(entry_point=ENTRY_POINT,
source_dir=CODE_PATH,
role=role,
train_instance_count=1,
train_instance_type='ml.p3.2xlarge',
framework_version="1.3.1",
py_version='py3',
debugger_hook_config=False,
hyperparameters=params,
sagemaker_session=sess) | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
After you define your estimator, specify the hyperparameters you want to tune and their possible values. You have three different types of hyperparameters. * Categorical parameters need to take one value from a discrete set. Define this by passing the list of possible values to CategoricalParameter(list) * Continuous parameters can take any real number value between the minimum and maximum value, defined by ContinuousParameter(min, max) * Integer parameters can take any integer value between the minimum and maximum value, defined by IntegerParameter(min, max) If possible, it's almost always best to specify a value as the least restrictive type. For example, tuning threshold as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with possible values of 0.01, 0.1, 0.15, or 0.2. | hyperparameter_ranges = {'lr': ContinuousParameter(0.01, 0.1),
'gamma': ContinuousParameter(400, 600)} | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
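For illustration only, a ranges dictionary mixing all three parameter types described above could look like the following; the extra entries (max_step, model) are assumptions about hyperparameters the training script would accept and are not used in this run:
example_hyperparameter_ranges = {
    'lr': ContinuousParameter(0.01, 0.1),                   # any real value between 0.01 and 0.1
    'max_step': IntegerParameter(50000, 200000),            # any integer between 50000 and 200000
    'model': CategoricalParameter(['DistMult', 'TransE'])   # one value from a discrete set
}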
Next, specify the objective metric that you want to tune and its definition. This includes the regular expression needed to extract that metric from the Amazon CloudWatch logs of the training job. You can capture evaluation results such as MR, MRR, and Hit10. | metric = []
mr_metric = {'Name': 'final_MR', 'Regex':"Test average MR at \[\S*\]: (\S*)"}
mrr_metric = {'Name': 'final_MRR', 'Regex':"Test average MRR at \[\S*\]: (\S*)"}
hit10_metric = {'Name': 'final_Hit10', 'Regex':"Test average HITS@10 at \[\S*\]: (\S*)"}
metric.append(mr_metric)
metric.append(mrr_metric)
metric.append(hit10_metric) | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
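As a quick sanity check, the regular expressions above can be tested locally against a hypothetical log line (the exact format of the real CloudWatch output may differ; this line is only implied by the regex itself):
import re
sample_log_line = "Test average MR at [100000]: 512.3"   # hypothetical log line
match = re.search(r"Test average MR at \[\S*\]: (\S*)", sample_log_line)
print(match.group(1))  # -> '512.3', the value the tuner would record as final_MR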
Now, create a HyperparameterTuner object, to which you pass: * The training estimator you created above * The hyperparameter ranges * The objective metric name and definition * The number of training jobs to run in total and how many training jobs should run simultaneously. More parallel jobs will finish tuning sooner, but may sacrifice accuracy. We recommend that you set the parallel jobs value to less than 10 percent of the total number of training jobs; it's set higher in this example to keep it short. * Whether you should maximize or minimize the objective metric. You choose 'Minimize' in this example, which is what you want for the MR result. You can also add a task_tag with value 'DGL' to help track the hyperparameter tuning task. | task_tags = [{'Key':'ML Task', 'Value':'DGL'}]
tuner = HyperparameterTuner(estimator,
objective_metric_name='final_MR',
objective_type='Minimize',
hyperparameter_ranges=hyperparameter_ranges,
metric_definitions=metric,
tags=task_tags,
max_jobs=6,
max_parallel_jobs=2) | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
And finally, you can start the tuning job by calling .fit(). | tuner.fit() | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
Run a quick check of the hyperparameter tuning job's status to make sure it started successfully and is InProgress. | boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus'] | _____no_output_____ | Apache-2.0 | sagemaker-python-sdk/dgl_kge/kge_pytorch_hypertune.ipynb | P15241328/amazon-sagemaker-examples |
You can also run the notebook in [COLAB](https://colab.research.google.com/github/deepmipt/DeepPavlov/blob/master/examples/classification_tutorial.ipynb). | !pip3 install deeppavlov | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Classification on DeepPavlov **Task**:Intent recognition on SNIPS dataset: https://github.com/snipsco/nlu-benchmark/tree/master/2017-06-custom-intent-engines that has already been recomposed to `csv` format and can be downloaded from http://files.deeppavlov.ai/datasets/snips_intents/train.csvFastText English word embeddings ~8Gb: http://files.deeppavlov.ai/deeppavlov_data/embeddings/wiki.en.bin Plan of the notebook with documentation links:1. [Data aggregation](Data-aggregation) * [DatasetReader](DatasetReader): [docs link](https://deeppavlov.readthedocs.io/en/latest/apiref/dataset_readers.html) * [DatasetIterator](DatasetIterator): [docs link](https://deeppavlov.readthedocs.io/en/latest/apiref/dataset_iterators.html)2. [Data preprocessing](Data-preprocessing): [docs link](https://deeppavlov.readthedocs.io/en/latest/components/data_processors.html) * [Lowercasing](Lowercasing) * [Tokenization](Tokenization) * [Vocabulary](Vocabulary)3. [Featurization](Featurization): [docs link](https://deeppavlov.readthedocs.io/en/latest/components/data_processors.html), [pre-trained embeddings link](https://deeppavlov.readthedocs.io/en/latest/intro/pretrained_vectors.html) * [Bag-of-words embedder](Bag-of-words) * [TF-IDF vectorizer](TF-IDF-Vectorizer) * [GloVe embedder](GloVe-embedder) * [Mean GloVe embedder](Mean-GloVe-embedder) * [GloVe weighted by TF-IDF embedder](GloVe-weighted-by-TF-IDF-embedder)4. [Models](Models): [docs link](https://deeppavlov.readthedocs.io/en/latest/components/classifiers.html) * [Building models in python](Models-in-python) - [Sklearn component classifiers](SklearnComponent-classifier-on-Tfidf-features-in-python) - [Keras classification model on GloVe emb](KerasClassificationModel-on-GloVe-embeddings-in-python) - [Sklearn component classifier on GloVe weighted emb](SklearnComponent-classifier-on-GloVe-weighted-by-TF-IDF-embeddings-in-python) * [Building models from configs](Models-from-configs) - [Sklearn component classifiers](SklearnComponent-classifier-on-Tfidf-features-from-config) - [Keras classification model](KerasClassificationModel-on-fastText-embeddings-from-config) - [Sklearn component classifier on GloVe weighted emb](SklearnComponent-classifier-on-GloVe-weighted-by-TF-IDF-embeddings-from-config) * [Bonus: pre-trained CNN model in DeepPavlov](Bonus:-pre-trained-CNN-model-in-DeepPavlov) Data aggregation First of all, let's download and look into data we will work with. | from deeppavlov.core.data.utils import simple_download
#download train data file for SNIPS
simple_download(url="http://files.deeppavlov.ai/datasets/snips_intents/train.csv",
destination="./snips/train.csv")
! head -n 15 snips/train.csv | text,intents
Add another song to the Cita Romántica playlist. ,AddToPlaylist
add clem burke in my playlist Pre-Party R&B Jams,AddToPlaylist
Add Live from Aragon Ballroom to Trapeo,AddToPlaylist
add Unite and Win to my night out,AddToPlaylist
Add track to my Digster Future Hits,AddToPlaylist
add the piano bar to my Cindy Wilson,AddToPlaylist
Add Spanish Harlem Incident to cleaning the house,AddToPlaylist
add The Greyest of Blue Skies in Indie EspaГ±ol my playlist,AddToPlaylist
Add the name kids in the street to the plylist New Indie Mix,AddToPlaylist
add album radar latino,AddToPlaylist
Add Tranquility to the Latin Pop Rising playlist. ,AddToPlaylist
Add d flame to the Dcode2016 playlist.,AddToPlaylist
Add album to my fairy tales,AddToPlaylist
I need another artist in the New Indie Mix playlist. ,AddToPlaylist
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
DatasetReader: Read data using `BasicClassificationDatasetReader` from DeepPavlov | from deeppavlov.dataset_readers.basic_classification_reader import BasicClassificationDatasetReader
# read data from particular columns of `.csv` file
dr = BasicClassificationDatasetReader().read(
data_path='./snips/',
train='train.csv',
x = 'text',
y = 'intents'
) | 2019-02-12 12:14:23.376 WARNING in 'deeppavlov.dataset_readers.basic_classification_reader'['basic_classification_reader'] at line 96: Cannot find snips/valid.csv file
2019-02-12 12:14:23.376 WARNING in 'deeppavlov.dataset_readers.basic_classification_reader'['basic_classification_reader'] at line 96: Cannot find snips/test.csv file
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
We don't have a ready train/valid/test split. | # check train/valid/test sizes
[(k, len(dr[k])) for k in dr.keys()] | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
DatasetIterator: Use `BasicClassificationDatasetIterator` to split `train` into `train` and `valid` and to generate batches of samples. | from deeppavlov.dataset_iterators.basic_classification_iterator import BasicClassificationDatasetIterator
# initialize data iterator splitting `train` field to `train` and `valid` in proportion 0.8/0.2
train_iterator = BasicClassificationDatasetIterator(
data=dr,
field_to_split='train', # field that will be splitted
split_fields=['train', 'valid'], # fields to which the fiald above will be splitted
split_proportions=[0.8, 0.2], #proportions for splitting
split_seed=23, # seed for splitting dataset
seed=42) # seed for iteration over dataset | 2019-02-12 12:14:23.557 INFO in 'deeppavlov.dataset_iterators.basic_classification_iterator'['basic_classification_iterator'] at line 73: Splitting field <<train>> to new fields <<['train', 'valid']>>
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Let's look into training samples. | # one can get train instances (or any other data type including `all`)
x_train, y_train = train_iterator.get_instances(data_type='train')
for x, y in list(zip(x_train, y_train))[:5]:
print('x:', x)
print('y:', y)
print('=================') | x: Is it freezing in Offerman, California?
y: ['GetWeather']
=================
x: put this song in the playlist Trap Land
y: ['AddToPlaylist']
=================
x: show me a textbook with a rating of 2 and a maximum rating of 6 that is current
y: ['RateBook']
=================
x: Will the weather be okay in Northern Luzon Heroes Hill National Park 4 and a half months from now?
y: ['GetWeather']
=================
x: Rate the current album a four
y: ['RateBook']
=================
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Data preprocessing We will be using lowercasing and tokenization as data preparation. DeepPavlov also contains several other preprocessors and tokenizers. Lowercasing `str_lower` lowercases texts. | from deeppavlov.models.preprocessors.str_lower import str_lower
str_lower(['Is it freezing in Offerman, California?']) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Tokenization: `NLTKMosesTokenizer` can split a string into tokens. | from deeppavlov.models.tokenizers.nltk_moses_tokenizer import NLTKMosesTokenizer
tokenizer = NLTKMosesTokenizer()
tokenizer(['Is it freezing in Offerman, California?']) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Let's preprocess the whole `train` part of the dataset. | train_x_lower_tokenized = str_lower(tokenizer(train_iterator.get_instances(data_type='train')[0])) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Vocabulary: Now we are ready to use vocabularies. They are very useful for: * extracting class labels and converting labels to indices and vice versa, * building vocabularies of characters or tokens. | from deeppavlov.core.data.simple_vocab import SimpleVocabulary
# initialize simple vocabulary to collect all appeared in the dataset classes
classes_vocab = SimpleVocabulary(
save_path='./snips/classes.dict',
load_path='./snips/classes.dict')
classes_vocab.fit((train_iterator.get_instances(data_type='train')[1]))
classes_vocab.save() | 2019-02-12 12:14:25.35 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 89: [saving vocabulary to /home/vimary/ipavlov/Pilot/examples/tutorials/snips/classes.dict]
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Let's see what classes the dataset contains and their indices in the vocabulary. | list(classes_vocab.items())
# also one can collect vocabulary of textual tokens appeared 2 and more times in the dataset
token_vocab = SimpleVocabulary(
save_path='./snips/tokens.dict',
load_path='./snips/tokens.dict',
min_freq=2,
special_tokens=('<PAD>', '<UNK>',),
unk_token='<UNK>')
token_vocab.fit(train_x_lower_tokenized)
token_vocab.save()
# number of tokens in dictionary
len(token_vocab)
# 10 most common words and number of times their appeared
token_vocab.freqs.most_common()[:10]
token_ids = token_vocab(str_lower(tokenizer(['Is it freezing in Offerman, California?'])))
token_ids
tokenizer(token_vocab(token_ids)) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Featurization: This part contains several possible ways of featurizing text samples. One can choose any appropriate vectorizer/embedder according to the available resources and the given task. Bag-of-words (BoW) and TF-IDF vectorizers convert text samples to vectors (one vector per sample), while fastText, GloVe, and fastText weighted by TF-IDF embedders produce either an embedding vector per token or an embedding vector per text sample (if `mean` is set to True). Bag-of-words: Matches a vector to each text sample indicating which words appeared in the given sample: text -> binary vector $v$: \[0, 1, 0, 0, 0, 1, ..., 1, 0, 1\]. The dimensionality of vector $v$ is equal to the vocabulary size. $v_i$ == 1 if word $i$ is in the text, $v_i$ == 0 otherwise. | import numpy as np
from deeppavlov.models.embedders.bow_embedder import BoWEmbedder
# initialize bag-of-words embedder giving total number of tokens
bow = BoWEmbedder(depth=token_vocab.len)
# it assumes indexed tokenized samples
bow(token_vocab(str_lower(tokenizer(['Is it freezing in Offerman, California?']))))
# all 8 tokens are in the vocabulary
sum(bow(token_vocab(str_lower(tokenizer(['Is it freezing in Offerman, California?']))))[0]) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
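A minimal pure-Python illustration of the bag-of-words definition above, using a toy vocabulary (an assumption, not the vocabulary built earlier):
toy_vocab = ['is', 'it', 'freezing', 'in', 'california', 'raining']
text_tokens = ['is', 'it', 'freezing', 'in', 'offerman', 'california']
v = [1 if word in text_tokens else 0 for word in toy_vocab]
print(v)  # [1, 1, 1, 1, 1, 0]: 'raining' is absent, 'offerman' is out of vocabulary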
TF-IDF Vectorizer: Matches a vector to each text sample: text -> vector $v$ from $R^N$, where $N$ is the vocabulary size. $TF-IDF(token, document) = TF(token, document) * IDF(token, document)$. $TF$ is the term frequency: $TF(token, document) = \frac{n_{token}}{\sum_{k}n_k}$. $IDF$ is the inverse document frequency: $IDF(token, all\_documents) = \frac{Total\ number\ of\ documents}{number\ of\ documents\ where\ token\ appeared}$. `SklearnComponent` in DeepPavlov is a universal wrapper for any vectorizer/estimator from the `sklearn` package. The only requirement is that the model class and the name of the infer method are passed as parameters. | from deeppavlov.models.sklearn import SklearnComponent
# initialize TF-IDF vectorizer sklearn component with `transform` as infer method
tfidf = SklearnComponent(
model_class="sklearn.feature_extraction.text:TfidfVectorizer",
infer_method="transform",
save_path='./tfidf_v0.pkl',
load_path='./tfidf_v0.pkl',
mode='train')
# fit on textual train instances and save it
tfidf.fit(str_lower(train_iterator.get_instances(data_type='train')[0]))
tfidf.save()
tfidf(str_lower(['Is it freezing in Offerman, California?']))
# number of tokens in the TF-IDF vocabulary
len(tfidf.model.vocabulary_) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
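To make the TF-IDF formulas above concrete, here is a small worked example in plain Python on a toy two-document corpus (note that sklearn's TfidfVectorizer additionally applies a smoothed logarithmic IDF and L2 normalization, so its numbers will differ):
docs = [["is", "it", "freezing"], ["is", "it", "raining", "today"]]
token, doc = "freezing", docs[0]
tf = doc.count(token) / len(doc)                 # 1/3: one occurrence out of three tokens
idf = len(docs) / sum(token in d for d in docs)  # 2/1 = 2, using the (log-free) definition above
print(tf * idf)                                  # 0.666...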
GloVe embedder: [GloVe](https://nlp.stanford.edu/projects/glove/) is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. | from deeppavlov.models.embedders.glove_embedder import GloVeEmbedder | Using TensorFlow backend.
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Let's download the GloVe embedding file | simple_download(url="http://files.deeppavlov.ai/embeddings/glove.6B.100d.txt",
destination="./glove.6B.100d.txt")
embedder = GloVeEmbedder(load_path='./glove.6B.100d.txt',
dim=100, pad_zero=True)
# output shape is (batch_size x max_num_tokens_in_the_batch x embedding_dim)
embedded_batch = embedder(str_lower(tokenizer(['Is it freezing in Offerman, California?'])))
len(embedded_batch), len(embedded_batch[0]), embedded_batch[0][0].shape | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Mean GloVe embedder: The embedder returns a vector per token, while we want one vector per text sample. Therefore, let's calculate the mean vector of the token embeddings. For that we can either initialize `GloVeEmbedder` with the `mean=True` parameter (`mean=False` by default), or pass `mean=True` when calling the embedder (this way the `mean` value applies only to that call). | # output shape is (batch_size x embedding_dim)
embedded_batch = embedder(str_lower(tokenizer(['Is it freezing in Offerman, California?'])), mean=True)
len(embedded_batch), embedded_batch[0].shape | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
GloVe weighted by TF-IDF embedder: One possible way to combine a TF-IDF vectorizer and any token embedder is to weigh the token embeddings by their TF-IDF coefficients (note that `mean` must be set to True to obtain one vector per sample, because **by default** the embedder still returns per-token embeddings). | from deeppavlov.models.embedders.tfidf_weighted_embedder import TfidfWeightedEmbedder
weighted_embedder = TfidfWeightedEmbedder(
embedder=embedder, # our GloVe embedder instance
tokenizer=tokenizer, # our tokenizer instance
mean=True, # to return one vector per sample
vectorizer=tfidf # our TF-IDF vectorizer
)
# output shape is (batch_size x embedding_dim)
embedded_batch = weighted_embedder(str_lower(tokenizer(['Is it freezing in Offerman, California?'])))
len(embedded_batch), embedded_batch[0].shape | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
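A minimal NumPy sketch of the idea behind TF-IDF weighting, with illustrative numbers only (the exact normalization inside TfidfWeightedEmbedder may differ):
import numpy as np
token_vectors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 tokens, embedding dim 2
tfidf_weights = np.array([0.2, 0.5, 0.3])                       # per-token TF-IDF coefficients
weighted_mean = (tfidf_weights[:, None] * token_vectors).sum(axis=0) / tfidf_weights.sum()
print(weighted_mean)  # [0.5 0.8], a single vector per text sample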
Models | from deeppavlov.metrics.accuracy import sets_accuracy
# get all train and valid data from iterator
x_train, y_train = train_iterator.get_instances(data_type="train")
x_valid, y_valid = train_iterator.get_instances(data_type="valid") | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Models in python SklearnComponent classifier on Tfidf-features in python | # initialize sklearn classifier, all parameters for classifier could be passed
cls = SklearnComponent(
model_class="sklearn.linear_model:LogisticRegression",
infer_method="predict",
save_path='./logreg_v0.pkl',
load_path='./logreg_v0.pkl',
C=1,
mode='train')
# fit sklearn classifier and save it
cls.fit(tfidf(x_train), y_train)
cls.save()
y_valid_pred = cls(tfidf(x_valid))
# Let's look into obtained result
print("Text sample: {}".format(x_valid[0]))
print("True label: {}".format(y_valid[0]))
print("Predicted label: {}".format(y_valid_pred[0]))
# let's calculate sets accuracy (because each element is a list of labels)
sets_accuracy(np.squeeze(y_valid), y_valid_pred) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
KerasClassificationModel on GloVe embeddings in python | from deeppavlov.models.classifiers.keras_classification_model import KerasClassificationModel
from deeppavlov.models.preprocessors.one_hotter import OneHotter
from deeppavlov.models.classifiers.proba2labels import Proba2Labels
# Initialize `KerasClassificationModel` that composes a shallow-and-wide CNN
# (named `cnn_model` here)
cls = KerasClassificationModel(save_path="./cnn_model_v0",
load_path="./cnn_model_v0",
embedding_size=embedder.dim,
n_classes=classes_vocab.len,
model_name="cnn_model",
text_size=15, # number of tokens
kernel_sizes_cnn=[3, 5, 7],
filters_cnn=128,
dense_size=100,
optimizer="Adam",
learning_rate=0.1,
learning_rate_decay=0.01,
loss="categorical_crossentropy")
# `KerasClassificationModel` assumes one-hotted distribution of classes per sample.
# `OneHotter` converts indices to one-hot vectors representation.
# To obtain indices we can use our `classes_vocab` intialized and fitted above
onehotter = OneHotter(depth=classes_vocab.len, single_vector=True)
# Train for 10 epochs
for ep in range(10):
for x, y in train_iterator.gen_batches(batch_size=64,
data_type="train"):
x_embed = embedder(tokenizer(str_lower(x)))
y_onehot = onehotter(classes_vocab(y))
cls.train_on_batch(x_embed, y_onehot)
# Save model weights and parameters
cls.save()
# Inferring on the validation data, we get a probability distribution over classes for each sample.
y_valid_pred = cls(embedder(tokenizer(str_lower(x_valid))))
# To convert probability distribution to labels,
# we first need to convert probabilities to indices,
# and then using vocabulary `classes_vocab` convert indices to labels.
#
# `Proba2Labels` converts probabilities to indices and supports three different modes:
# if `max_proba` is true, returns indices of the highest probabilities
# if `confidence_threshold` is given, returns indices with probabilities higher than the threshold
# if `top_n` is given, returns `top_n` indices with highest probabilities
prob2labels = Proba2Labels(max_proba=True)
# Let's look into obtained result
print("Text sample: {}".format(x_valid[0]))
print("True label: {}".format(y_valid[0]))
print("Predicted probability distribution: {}".format(dict(zip(classes_vocab.keys(),
y_valid_pred[0]))))
print("Predicted label: {}".format(classes_vocab(prob2labels(y_valid_pred))[0]))
# calculate sets accuracy
sets_accuracy(y_valid, classes_vocab(prob2labels(y_valid_pred))) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
SklearnComponent classifier on GloVe weighted by TF-IDF embeddings in python | # initialize sklearn classifier, all parameters for classifier could be passed
cls = SklearnComponent(
model_class="sklearn.linear_model:LogisticRegression",
infer_method="predict",
save_path='./logreg_v1.pkl',
load_path='./logreg_v1.pkl',
C=1,
mode='train')
# fit sklearn classifier and save it
cls.fit(weighted_embedder(str_lower(tokenizer(x_train))), y_train)
cls.save()
y_valid_pred = cls(weighted_embedder(str_lower(tokenizer(x_valid))))
# Let's look into obtained result
print("Text sample: {}".format(x_valid[0]))
print("True label: {}".format(y_valid[0]))
print("Predicted label: {}".format(y_valid_pred[0]))
# let's calculate sets accuracy (because each element is a list of labels)
sets_accuracy(np.squeeze(y_valid), y_valid_pred) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Let's free our memory from embeddings and models | embedder.reset()
cls.reset() | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Models from configs | from deeppavlov import build_model
from deeppavlov import train_model | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
SklearnComponent classifier on Tfidf-features from config | logreg_config = {
"dataset_reader": {
"class_name": "basic_classification_reader",
"x": "text",
"y": "intents",
"data_path": "./snips"
},
"dataset_iterator": {
"class_name": "basic_classification_iterator",
"seed": 42,
"split_seed": 23,
"field_to_split": "train",
"split_fields": [
"train",
"valid"
],
"split_proportions": [
0.9,
0.1
]
},
"chainer": {
"in": [
"x"
],
"in_y": [
"y"
],
"pipe": [
{
"id": "classes_vocab",
"class_name": "simple_vocab",
"fit_on": [
"y"
],
"save_path": "./snips/classes.dict",
"load_path": "./snips/classes.dict",
"in": "y",
"out": "y_ids"
},
{
"in": [
"x"
],
"out": [
"x_vec"
],
"fit_on": [
"x",
"y_ids"
],
"id": "tfidf_vec",
"class_name": "sklearn_component",
"save_path": "tfidf_v1.pkl",
"load_path": "tfidf_v1.pkl",
"model_class": "sklearn.feature_extraction.text:TfidfVectorizer",
"infer_method": "transform"
},
{
"in": "x",
"out": "x_tok",
"id": "my_tokenizer",
"class_name": "nltk_moses_tokenizer",
"tokenizer": "wordpunct_tokenize"
},
{
"in": [
"x_vec"
],
"out": [
"y_pred"
],
"fit_on": [
"x_vec",
"y"
],
"class_name": "sklearn_component",
"main": True,
"save_path": "logreg_v2.pkl",
"load_path": "logreg_v2.pkl",
"model_class": "sklearn.linear_model:LogisticRegression",
"infer_method": "predict",
"ensure_list_output": True
}
],
"out": [
"y_pred"
]
},
"train": {
"batch_size": 64,
"metrics": [
"accuracy"
],
"validate_best": True,
"test_best": False
}
}
# we can train and evaluate model from config
m = train_model(logreg_config)
# or we can just load the pre-trained model (coincides with what we did above)
m = build_model(logreg_config)
m(["Is it freezing in Offerman, California?"]) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
KerasClassificationModel on GloVe embeddings from config | cnn_config = {
"dataset_reader": {
"class_name": "basic_classification_reader",
"x": "text",
"y": "intents",
"data_path": "snips"
},
"dataset_iterator": {
"class_name": "basic_classification_iterator",
"seed": 42,
"split_seed": 23,
"field_to_split": "train",
"split_fields": [
"train",
"valid"
],
"split_proportions": [
0.9,
0.1
]
},
"chainer": {
"in": [
"x"
],
"in_y": [
"y"
],
"pipe": [
{
"id": "classes_vocab",
"class_name": "simple_vocab",
"fit_on": [
"y"
],
"level": "token",
"save_path": "./snips/classes.dict",
"load_path": "./snips/classes.dict",
"in": "y",
"out": "y_ids"
},
{
"in": "x",
"out": "x_tok",
"id": "my_tokenizer",
"class_name": "nltk_tokenizer",
"tokenizer": "wordpunct_tokenize"
},
{
"in": "x_tok",
"out": "x_emb",
"id": "my_embedder",
"class_name": "glove",
"load_path": "./glove.6B.100d.txt",
"dim": 100,
"pad_zero": True
},
{
"in": "y_ids",
"out": "y_onehot",
"class_name": "one_hotter",
"depth": "#classes_vocab.len",
"single_vector": True
},
{
"in": [
"x_emb"
],
"in_y": [
"y_onehot"
],
"out": [
"y_pred_probas"
],
"main": True,
"class_name": "keras_classification_model",
"save_path": "./cnn_model_v1",
"load_path": "./cnn_model_v1",
"embedding_size": "#my_embedder.dim",
"n_classes": "#classes_vocab.len",
"kernel_sizes_cnn": [
1,
2,
3
],
"filters_cnn": 256,
"optimizer": "Adam",
"learning_rate": 0.01,
"learning_rate_decay": 0.1,
"loss": "categorical_crossentropy",
"coef_reg_cnn": 1e-4,
"coef_reg_den": 1e-4,
"dropout_rate": 0.5,
"dense_size": 100,
"model_name": "cnn_model"
},
{
"in": "y_pred_probas",
"out": "y_pred_ids",
"class_name": "proba2labels",
"max_proba": True
},
{
"in": "y_pred_ids",
"out": "y_pred_labels",
"ref": "classes_vocab"
}
],
"out": [
"y_pred_labels"
]
},
"train": {
"epochs": 10,
"batch_size": 64,
"metrics": [
"sets_accuracy",
"f1_macro",
{
"name": "roc_auc",
"inputs": ["y_onehot", "y_pred_probas"]
}
],
"validation_patience": 5,
"val_every_n_epochs": 1,
"log_every_n_epochs": 1,
"show_examples": True,
"validate_best": True,
"test_best": False
}
}
# we can train and evaluate model from config
m = train_model(cnn_config)
# or we can just load the pre-trained model (coincides with what we did above)
m = build_model(cnn_config)
m(["Is it freezing in Offerman, California?"]) | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
SklearnComponent classifier on GloVe weighted by TF-IDF embeddings from config | logreg_config = {
"dataset_reader": {
"class_name": "basic_classification_reader",
"x": "text",
"y": "intents",
"data_path": "snips"
},
"dataset_iterator": {
"class_name": "basic_classification_iterator",
"seed": 42,
"split_seed": 23,
"field_to_split": "train",
"split_fields": [
"train",
"valid"
],
"split_proportions": [
0.9,
0.1
]
},
"chainer": {
"in": [
"x"
],
"in_y": [
"y"
],
"pipe": [
{
"id": "classes_vocab",
"class_name": "simple_vocab",
"fit_on": [
"y"
],
"save_path": "./snips/classes.dict",
"load_path": "./snips/classes.dict",
"in": "y",
"out": "y_ids"
},
{
"in": [
"x"
],
"out": [
"x_vec"
],
"fit_on": [
"x",
"y_ids"
],
"id": "my_tfidf_vectorizer",
"class_name": "sklearn_component",
"save_path": "tfidf_v2.pkl",
"load_path": "tfidf_v2.pkl",
"model_class": "sklearn.feature_extraction.text:TfidfVectorizer",
"infer_method": "transform"
},
{
"in": "x",
"out": "x_tok",
"id": "my_tokenizer",
"class_name": "nltk_moses_tokenizer"
},
{
"in": "x_tok",
"out": "x_emb",
"id": "my_embedder",
"class_name": "glove",
"save_path": "./glove.6B.100d.txt",
"load_path": "./glove.6B.100d.txt",
"dim": 100,
"pad_zero": True
},
{
"class_name": "one_hotter",
"id": "my_onehotter",
"depth": "#classes_vocab.len",
"in": "y_ids",
"out": "y_onehot",
"single_vector": True
},
{
"in": "x_tok",
"out": "x_weighted_emb",
"class_name": "tfidf_weighted",
"id": "my_weighted_embedder",
"embedder": "#my_embedder",
"tokenizer": "#my_tokenizer",
"vectorizer": "#my_tfidf_vectorizer",
"mean": True
},
{
"in": [
"x_weighted_emb"
],
"out": [
"y_pred"
],
"fit_on": [
"x_weighted_emb",
"y"
],
"class_name": "sklearn_component",
"main": True,
"save_path": "logreg_v3.pkl",
"load_path": "logreg_v3.pkl",
"model_class": "sklearn.linear_model:LogisticRegression",
"infer_method": "predict",
"ensure_list_output": True
}
],
"out": [
"y_pred"
]
},
"train": {
"epochs": 10,
"batch_size": 64,
"metrics": [
"sets_accuracy"
],
"show_examples": False,
"validate_best": True,
"test_best": False
}
}
# we can train and evaluate model from config
m = train_model(logreg_config)
# or we can just load the pre-trained model (coincides with what we did above)
m = build_model(logreg_config)
m(["Is it freezing in Offerman, California?"])
# let's free memory
del m | _____no_output_____ | Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Bonus: pre-trained CNN model in DeepPavlov. Download the model files (`wiki.en.bin`, ~8 Gb of embeddings): `! python -m deeppavlov download intents_snips_big`. Evaluate metrics on the validation set (no test set provided): `! python -m deeppavlov evaluate intents_snips_big`. Or one can use the model from Python code: | from pathlib import Path
import deeppavlov
from deeppavlov import build_model, evaluate_model
from deeppavlov.download import deep_download
config_path = Path(deeppavlov.__file__).parent.joinpath('configs/classifiers/intents_snips_big.json')
# let's download all the required data - model files, embeddings, vocabularies
deep_download(config_path)
# now one can initialize model
m = build_model(config_path)
m(["Is it freezing in Offerman, California?"])
# let's free memory
del m
# or one can evaluate model WITHOUT training
evaluate_model(config_path) | 2018-12-13 18:45:33.675 WARNING in 'deeppavlov.dataset_readers.basic_classification_reader'['basic_classification_reader'] at line 97: Cannot find /home/dilyara/.deeppavlov/downloads/snips/valid.csv file
2018-12-13 18:45:33.675 WARNING in 'deeppavlov.dataset_readers.basic_classification_reader'['basic_classification_reader'] at line 97: Cannot find /home/dilyara/.deeppavlov/downloads/snips/test.csv file
2018-12-13 18:45:33.676 INFO in 'deeppavlov.dataset_iterators.basic_classification_iterator'['basic_classification_iterator'] at line 73: Splitting field <<train>> to new fields <<['train', 'valid']>>
2018-12-13 18:45:33.679 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 100: [loading vocabulary from /home/dilyara/.deeppavlov/models/classifiers/intents_snips_v10/classes.dict]
2018-12-13 18:45:33.680 INFO in 'deeppavlov.models.embedders.fasttext_embedder'['fasttext_embedder'] at line 52: [loading fastText embeddings from `/home/dilyara/.deeppavlov/downloads/embeddings/wiki.en.bin`]
2018-12-13 18:45:54.568 INFO in 'deeppavlov.models.classifiers.keras_classification_model'['keras_classification_model'] at line 287: [initializing `KerasClassificationModel` from saved]
2018-12-13 18:45:54.913 INFO in 'deeppavlov.models.classifiers.keras_classification_model'['keras_classification_model'] at line 297: [loading weights from model.h5]
2018-12-13 18:45:55.112 INFO in 'deeppavlov.models.classifiers.keras_classification_model'['keras_classification_model'] at line 137: Model was successfully initialized!
Model summary:
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, None, 300) 0
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, None, 256) 230656 input_1[0][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, None, 256) 384256 input_1[0][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, None, 256) 537856 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, 256) 1024 conv1d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, 256) 1024 conv1d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, 256) 1024 conv1d_3[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, None, 256) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, None, 256) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, None, 256) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_1 (GlobalM (None, 256) 0 activation_1[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_2 (GlobalM (None, 256) 0 activation_2[0][0]
__________________________________________________________________________________________________
global_max_pooling1d_3 (GlobalM (None, 256) 0 activation_3[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 768) 0 global_max_pooling1d_1[0][0]
global_max_pooling1d_2[0][0]
global_max_pooling1d_3[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 768) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 100) 76900 dropout_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 100) 400 dense_1[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 100) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 100) 0 activation_4[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 7) 707 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 7) 28 dense_2[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 7) 0 batch_normalization_5[0][0]
==================================================================================================
Total params: 1,233,875
Trainable params: 1,232,125
Non-trainable params: 1,750
__________________________________________________________________________________________________
2018-12-13 18:45:55.113 INFO in 'deeppavlov.core.commands.train'['train'] at line 207: Testing the best saved model
| Apache-2.0 | examples/classification_tutorial.ipynb | ayeffkay/DeepPavlov |
Auto MPG data | dataset_path = '/Users/mehdi/.keras/datasets/auto-mpg.data'
# read using pandas
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
dataset = dataset.dropna()
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.shape
t_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(t_dataset.index)
val_dataset = t_dataset.sample(frac=0.25, random_state=0)
train_dataset = t_dataset.drop(val_dataset.index)
train_dataset.shape, val_dataset.shape, test_dataset.shape
t_dataset.shape, dataset.shape
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
val_labels = val_dataset.pop('MPG')
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
normed_val_data = norm(val_dataset)
import numpy as np
class DATA(object):
def __init__(self, x, y, w=1.0):
self.X = x
self.Y = y
self.W = w*np.ones_like(y)
train = DATA(normed_train_data, train_labels)
test = DATA(normed_test_data, test_labels)
val = DATA(normed_val_data, val_labels)
kw = dict(nfeature=9)
model0 = ffnn.run_model(9, train, val, test, units=[0], batch_size=20)
model1 = ffnn.run_model(9, train, val, test, units=[10, 10], batch_size=20)
model2 = ffnn.run_model(9, train, val, test, units=[50, 50], batch_size=20)
model3 = ffnn.run_model(9, train, val, test, units=[100, 100], batch_size=20)
import matplotlib.pyplot as plt
utils.plot_history([model0['history'],
model1['history'],
model2['history'],
model3['history']],
['linear',
'2 layers of 10',
'2 layers of 50',
'2 layers of 100'])
plt.show()
import pandas as pd
pd.DataFrame(model0['history'].history).tail().T
pd.DataFrame(model0['history'].history).tail().T
utils.plot_prederr(test.Y, model1['Ypred'].flatten())
model2['history'].model.get_config()
sys.path.append('/Users/mehdi/github/SYSNet/src')
import NN
help(NN.Netregression)
NN.Netregression? | _____no_output_____ | MIT | notebooks/trunk/regression-v2.ipynb | mehdirezaie/LSSutils |
The aim of this notebook is to explore the following questions: - [ ] Does CSR ongevellan have similar numbers as the incident data that has been provided by RWS - [ ] Is there a common key between the 2 datasets such that we can beef up RWS using Ongavellen. | rws = pd.read_sql('select * from rws_schema.ongevallen_raw;', con=conn)
csr = pd.read_sql('select * from rws_schema.incidents;', con=conn)
csr.head()
rws.columns
csr.columns
csr.inc_type.value_counts(normalize=True)
csr.loc[:,'inc_start'] = pd.to_datetime(csr.inc_start)
csr.loc[:,'date'] = csr.inc_start.apply(lambda x: x.date())
csr.loc[:,'year'] = csr.inc_start.apply(lambda x: x.year)
csr.loc[:,'accident'] = 1
d = csr.pivot_table(index='inc_type',columns='year', values='accident', aggfunc=sum)
d.loc['Ongeval',:]
rws.jaar.value_counts().sort_index()
csr.head(1).transpose() | _____no_output_____ | MIT | notebooks/EDA - Incident data from CSR vs Ongevallen from RWS.ipynb | G-Simeone/Learning_Accident_Occurence_on_Dutch_Highways |
Do they have a common key? | # what are the common columns
c = set(csr.columns)
r = set(rws.columns)
c.intersection(r)
r.intersection(c) | _____no_output_____ | MIT | notebooks/EDA - Incident data from CSR vs Ongevallen from RWS.ipynb | G-Simeone/Learning_Accident_Occurence_on_Dutch_Highways |
Because column names have been edited in english, so there is no direct intersection | csr.loc[csr.inc_type=='Ongeval']
rws.head()
csr.shape
pd.to_numeric(rws.id_jaar.map(lambda x: x.split('.')[0])).describe() | _____no_output_____ | MIT | notebooks/EDA - Incident data from CSR vs Ongevallen from RWS.ipynb | G-Simeone/Learning_Accident_Occurence_on_Dutch_Highways |
Non Numerical data
non_numerical_data = train.select_dtypes(include="object")
non_numerical_data.head(3)
train.head()
numerical_data = train.select_dtypes(exclude="object")
numerical_data.head(3)
train.head()
# Get the numerical column names, excluding the target
numericals = train.select_dtypes(include=[np.number]).columns.tolist()
numericals.remove("TomorrowRainForecast")
#Get categoricals
categoricals = train.select_dtypes(exclude=[np.number]).columns.tolist()
# Clean data
train = basic_data_wrangling(train)
X_final_test = basic_data_wrangling(test)
train.head()
# Prepare data: separate features and label, then split into train and test
label = train.TomorrowRainForecast
features = train.drop('TomorrowRainForecast', axis=1)
X_train, X_test, y_train, y_test = train_test_split(features, label, test_size=0.33, random_state=0)
# Drop wind-direction features (columns whose names contain both 'Dir' and 'StrongWind')
old_X_train = X_train
for col_name in X_train:
if col_name.find('Dir') != -1 and col_name.find('StrongWind')!= -1 :
X_train = X_train.drop(col_name,axis=1)
X_test = X_test.drop(col_name,axis=1)
"""
range_min=0
range_max=1
min_max_scaler = MinMaxScaler(feature_range=(range_min, range_max))
X_train = min_max_scaler.fit_transform(X_train)
pd.DataFrame( X_train)
"""
#Classifier
# Choose the model
random_forest = RandomForestClassifier(random_state=10, n_estimators=500) #, n_estimators = 500) # max_depth=10
'''
random_forest = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=6, max_features=45, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=500, n_jobs=1,
oob_score=False, random_state=10, verbose=0,
warm_start=False)
'''
# Fit the model
random_forest.fit(X_train, y_train)
# Make the predictions
random_forest_preds = random_forest.predict_proba(X_test)
cols = X_train.columns.tolist()
submit_preds = random_forest.predict_proba(X_final_test[cols])
# Score the predictions
score_frst = roc_auc_score(y_test, random_forest_preds[:,1])
print("ROC AUC: %f" % score_frst)
X_train.shape
y_train.shape
X_final_test[cols].shape
# Classifier: Gaussian naive Bayes
# Choose the model
naive_bayes = GaussianNB()
# Fit the model
naive_bayes = naive_bayes.fit(X_train, y_train)
# Make the predictions
naive_bayes_preds = naive_bayes.predict_proba( X_final_test[cols])
naive_bayes_preds2 = naive_bayes.predict_proba( X_test)
# Score the predictions
score_gaus = roc_auc_score(y_test, naive_bayes_preds2[:,1])
#print("ROC AUC: %f" % score_gaus)
logReg = LogisticRegression(random_state=10)
#Voting classifier
#total = score_frst + score_gaus
#weights = [score_frst/total ,score_gaus/total ]
#eclf1 = VotingClassifier(estimators=[('rand_frst', random_forest), ('naive_bayes', naive_bayes)], voting='soft', weights = weights )
eclf1 = VotingClassifier(estimators=[('rand_frst', random_forest), ('naive_bayes', naive_bayes),('logreg',logReg)], voting='soft')
eclf1 = eclf1.fit(X_train,y_train)
results=eclf1.predict_proba(X_final_test[cols])[:,1]
print(results)
X_train.head()
from sklearn.model_selection import cross_val_score, cross_validate
grid_results = cross_validate(random_forest, X_test, y_test, scoring="roc_auc",
return_train_score=True, cv=5)
grid_results
pd.DataFrame(grid_results).mean()
X_test.head()
independent_variables = numericals
estimator=DecisionTreeClassifier()
random_search_parameter_space_dist = {
"max_depth": randint(1, 100),
"max_features": randint(1, len(independent_variables)),
"class_weight": ["balanced", None]
}
randomized_search = RandomizedSearchCV(
estimator,
random_search_parameter_space_dist,
cv=5, n_iter=250,
random_state=42,
return_train_score=True,
n_jobs = 10 )
%%timeit -n 1 -r 1
randomized_search.fit(X_train, y_train)
randomized_search.best_estimator_
randomized_search.best_score_
randomized_search = RandomizedSearchCV(
RandomForestClassifier(),
random_search_parameter_space_dist,
cv=5, n_iter=250,
random_state=42,
return_train_score=True,
n_jobs = 10 )
%%timeit -n 1 -r 1
randomized_search.fit(X_train, y_train)
randomized_search.best_estimator_
randomized_search.best_score_
#results = submit_preds[:,1]
print(len(results))
predictions = pd.DataFrame({'ID': X_final_test.index,'TomorrowRainForecast':results})
#Output
predictions.to_csv('predictions_vote_2.csv', index=False)
#Output
predictions.to_csv('predictions.csv', index=False)
#Plot
"""
feature_importances = random_forest.feature_importances_
feature_importances = pd.Series(feature_importances, index=X_train.columns, name="feature_importance_value")
matplotlib.rcParams["figure.figsize"] = [18, 18]
feature_importances.plot.barh();"""
random_forest_preds | _____no_output_____ | MIT | Binary-Classification/it will rain tomorrow/notebooks/Binary Classification-Random Forest.ipynb | mamonteiro-brg/Lisbon-Data-Science-Academy |
Parcels Experiment:
Expanding the polyline code to release particles at density based on local velocity normal to section.
_(Based on an experiment originally designed by Christina Schmidt.)_
_(Runs on GEOMAR Jupyter Server at https://schulung3.geomar.de/user/workshop007/lab)_
To do
- Check/ask how OceanParcels deals with partial cells, if it does.
  - It doesn't. Does it matter?
Technical preamble | %matplotlib inline
from parcels import (
AdvectionRK4_3D,
ErrorCode,
FieldSet,
JITParticle,
ParticleSet,
Variable
)
# from operator import attrgetter
from datetime import datetime, timedelta
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
import cmocean as co
import pandas as pd
import xarray as xr
# import dask as dask
| INFO: Compiled ParcelsRandom ==> /tmp/parcels-62665/libparcels_random_657e0035-5181-471b-9b3b-09640069ddf8.so
| MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Experiment settings (user input)
Parameters
These can be set in papermill. | # OSNAP multiline details
sectionPathname = '../data/external/'
sectionFilename = 'osnap_pos_wp.txt'
sectionname = 'osnap'
# location of input data
path_name = '/data/iAtlantic/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/'
experiment_name = 'VIKING20X.L46-KKG36107B'
data_resolution = '1m'
w_name_extension = '_repaire_depthw_time'
# location of mask data
mask_path_name = '/data/iAtlantic/ocean-only/VIKING20X.L46-KKG36107B/nemo/suppl/'
mesh_mask_filename = '1_mesh_mask.nc_notime_depthw'
# location of output data
outpath_name = '../data/raw/'
year_prefix = 201 # file-name prefix: selects model years from 2010 onwards
# set line segment to use
start_vertex = 4
end_vertex = 12
# experiment duration etc
runtime_in_days = 10
dt_in_minutes = -10
# repeatdt = timedelta(days=3)
# number of particles to track
create_number_particles = 200000 # many will not be ocean points
use_number_particles = 200000
min_release_depth = 0
max_release_depth = 1_000
# max current speed for particle selection
max_current = 1.0
# set base release date and time
t_0_str = '2010-01-16T12:00:00'
t_start_str = '2016-01-16T12:00:00'
# particle positions are stored every x hours
outputdt_in_hours = 120
# select subdomain (to decrease needed resources) comment out to use whole domain
# sd_i1, sd_i2 = 0, 2404 # western/eastern limit (indices not coordinates)
# sd_j1, sd_j2 = 1200, 2499 # southern/northern limit (indices not coordinates)
# sd_z1, sd_z2 = 0, 46
# how to initialize the random number generator
# --> is set in next cell
# RNG_seed = 123
use_dask_chunks = True
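The parameters above are the notebook defaults; papermill overrides them in the injected cell that follows. A minimal sketch of how such a parameterised run could be launched (the notebook and output filenames here are only assumptions):
# import papermill as pm
# pm.execute_notebook(
#     '037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill.ipynb',  # assumed input notebook
#     'executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed.ipynb',
#     parameters=dict(t_start_str='2019-10-20T12:00:00', runtime_in_days=3650),
# )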
# Parameters
path_name = "/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/"
data_resolution = "5d"
w_name_extension = ""
mask_path_name = "/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/suppl/"
mesh_mask_filename = "1_mesh_mask.nc"
year_prefix = ""
runtime_in_days = 3650
create_number_particles = 4000000
use_number_particles = 4000000
max_release_depth = 1000
max_current = 2.0
t_0_str = "1980-01-03T12:00:00"
t_start_str = "2019-10-20T12:00:00"
use_dask_chunks = False
| _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Derived variables | # times
t_0 = datetime.fromisoformat(t_0_str) # using monthly mean fields. Check dates.
t_start = datetime.fromisoformat(t_start_str)
# RNG seed based on release day (days since 1980-01-03)
RNG_seed = int((t_start - t_0).total_seconds() / (60*60*24))
# names of files to load
fname_U = f'1_{experiment_name}_{data_resolution}_{year_prefix}*_grid_U.nc'
fname_V = f'1_{experiment_name}_{data_resolution}_{year_prefix}*_grid_V.nc'
fname_T = f'1_{experiment_name}_{data_resolution}_{year_prefix}*_grid_T.nc'
fname_W = f'1_{experiment_name}_{data_resolution}_{year_prefix}*_grid_W.nc{w_name_extension}'
sectionPath = Path(sectionPathname)
data_path = Path(path_name)
mask_path = Path(mask_path_name)
outpath = Path(outpath_name)
display(t_0)
display(t_start)
if dt_in_minutes > 0:
direction = '_forwards_'
else:
direction = '_backward_'
year_str = str(t_start.year)
month_str = str(t_start.month).zfill(2)
day_str = str(t_start.day).zfill(2)
days = str(runtime_in_days)
seed = str(RNG_seed)
npart= str(use_number_particles)
degree2km = 1.852*60.0  # km per degree of latitude (1 arc-minute = 1 nautical mile = 1.852 km)
| _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Construct input / output paths etc. | mesh_mask = mask_path / mesh_mask_filename
| _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Load input datasets | def fieldset_defintions(
list_of_filenames_U, list_of_filenames_V,
list_of_filenames_W, list_of_filenames_T,
mesh_mask
):
ds_mask = xr.open_dataset(mesh_mask)
filenames = {'U': {'lon': (mesh_mask),
'lat': (mesh_mask),
'depth': list_of_filenames_W[0],
'data': list_of_filenames_U},
'V': {'lon': (mesh_mask),
'lat': (mesh_mask),
'depth': list_of_filenames_W[0],
'data': list_of_filenames_V},
'W': {'lon': (mesh_mask),
'lat': (mesh_mask),
'depth': list_of_filenames_W[0],
'data': list_of_filenames_W},
'T': {'lon': (mesh_mask),
'lat': (mesh_mask),
'depth': list_of_filenames_W[0],
'data': list_of_filenames_T},
'S': {'lon': (mesh_mask),
'lat': (mesh_mask),
'depth': list_of_filenames_W[0],
'data': list_of_filenames_T},
'MXL': {'lon': (mesh_mask),
'lat': (mesh_mask),
'data': list_of_filenames_T}
}
variables = {'U': 'vozocrtx',
'V': 'vomecrty',
'W': 'vovecrtz',
'T': 'votemper',
'S': 'vosaline',
'MXL':'somxl010'
}
dimensions = {'U': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw',
'time': 'time_counter'}, # needs to be on f-nodes
'V': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw',
'time': 'time_counter'}, # needs to be on f-nodes
'W': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw',
'time': 'time_counter'}, # needs to be on f-nodes
'T': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw',
'time': 'time_counter'}, # needs to be on t-nodes
'S': {'lon': 'glamf', 'lat': 'gphif', 'depth': 'depthw',
'time': 'time_counter'}, # needs to be on t-nodes
'MXL': {'lon': 'glamf', 'lat': 'gphif',
'time': 'time_counter'}, # needs to be on t-nodes
}
# exclude the two grid cells at the edges of the nest as they contain 0
# and everything south of 20N
indices = {'lon': range(2, ds_mask.x.size-2), 'lat': range(1132, ds_mask.y.size-2)}
# indices = {
# 'U': {'depth': range(sd_z1, sd_z2), 'lon': range(sd_i1, sd_i2), 'lat': range(sd_j1, sd_j2)},
# 'V': {'depth': range(sd_z1, sd_z2), 'lon': range(sd_i1, sd_i2), 'lat': range(sd_j1, sd_j2)},
# 'W': {'depth': range(sd_z1, sd_z2), 'lon': range(sd_i1, sd_i2), 'lat':range(sd_j1, sd_j2)},
# 'T': {'depth': range(sd_z1, sd_z2), 'lon': range(sd_i1, sd_i2), 'lat':range(sd_j1, sd_j2)},
# 'S': {'depth': range(sd_z1, sd_z2), 'lon': range(sd_i1, sd_i2), 'lat':range(sd_j1, sd_j2)}
# }
if use_dask_chunks:
field_chunksizes = {'U': {'lon':('x', 1024), 'lat':('y',128), 'depth': ('depthw', 64),
'time': ('time_counter',3)}, # needs to be on f-nodes
'V': {'lon':('x', 1024), 'lat':('y',128), 'depth': ('depthw', 64),
'time': ('time_counter',3)}, # needs to be on f-nodes
'W': {'lon':('x', 1024), 'lat':('y',128), 'depth': ('depthw', 64),
'time': ('time_counter',3)}, # needs to be on f-nodes
'T': {'lon':('x', 1024), 'lat':('y',128), 'depth': ('depthw', 64),
'time': ('time_counter',3)}, # needs to be on t-nodes
'S': {'lon':('x', 1024), 'lat':('y',128), 'depth': ('depthw', 64),
'time': ('time_counter',3)}, # needs to be on t-nodes
'MXL': {'lon':('x', 1024), 'lat':('y',128),
'time': ('time_counter',3)}, # needs to be on t-nodes
}
else:
field_chunksizes = None
return FieldSet.from_nemo(
filenames, variables, dimensions,
indices=indices,
chunksize=field_chunksizes, # = None for no chunking
mesh='spherical',
tracer_interp_method='cgrid_tracer'
# ,time_periodic=time_loop_period
# ,allow_time_extrapolation=True
)
def create_fieldset(
data_path=data_path, experiment_name=experiment_name,
fname_U=fname_U, fname_V=fname_V, fname_W=fname_W, fname_T=fname_T,
mesh_mask = mesh_mask
):
files_U = list(sorted((data_path).glob(fname_U)))
files_V = list(sorted((data_path).glob(fname_V)))
files_W = list(sorted((data_path).glob(fname_W)))
files_T = list(sorted((data_path).glob(fname_T)))
print(files_U)
fieldset = fieldset_defintions(
files_U, files_V,
files_W, files_T, mesh_mask)
return fieldset
fieldset = create_fieldset() | [PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19800101_19801231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19810101_19811231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19820101_19821231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19830101_19831231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19840101_19841231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19850101_19851231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19860101_19861231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19870101_19871231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19880101_19881231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19890101_19891231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19900101_19901231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19910101_19911231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19920101_19921231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19930101_19931231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19940101_19941231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19950101_19951231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19960101_19961231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19970101_19971231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19980101_19981231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_19990101_19991231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20000101_20001231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20010101_20011231_grid_U.nc'), 
PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20020101_20021231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20030101_20031231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20040101_20041231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20050101_20051231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20060101_20061231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20070101_20071231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20080101_20081231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20090101_20091231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20100101_20101231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20110101_20111231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20120101_20121231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20130101_20131231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20140101_20141231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20150101_20151231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20160101_20161231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20170101_20171231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20180101_20181231_grid_U.nc'), PosixPath('/gxfs_work1/geomar/smomw355/model_data/ocean-only/VIKING20X.L46-KKG36107B/nemo/output/1_VIKING20X.L46-KKG36107B_5d_20190101_20191231_grid_U.nc')]
| MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Create Virtual Particles
Add a couple of simple plotting routines. | def plot_section_sdist():
plt.figure(figsize=(10,5))
u = np.array([p.uvel for p in pset]) * degree2km * 1000.0 * np.cos(np.radians(pset.lat))
v = np.array([p.vvel for p in pset]) * degree2km * 1000.0
section_index = np.searchsorted(lonlat.lon,pset.lon)-1
u_normal = v * lonlatdiff.costheta[section_index].data - u * lonlatdiff.sintheta[section_index].data
y = (pset.lat - lonlat.lat[section_index]) * degree2km
x = (pset.lon - lonlat.lon[section_index]) * degree2km*np.cos(np.radians(lonlat2mean.lat[section_index+1].data))
dist = np.sqrt(x**2 + y**2) + lonlatdiff.length_west[section_index].data
plt.scatter(
dist,
[p.depth for p in pset],
1,
u_normal,
cmap=co.cm.balance,vmin=-0.3,vmax=0.3
)
plt.ylim(1200,0)
plt.colorbar(label = r'normal velocity [$\mathrm{m\ s}^{-1}$]')
plt.xlabel('distance [km]')
plt.ylabel('depth [m]')
return
def plot_section_lon():
plt.figure(figsize=(10,5))
u = np.array([p.uvel for p in pset]) * degree2km * 1000.0 * np.cos(np.radians(pset.lat))
v = np.array([p.vvel for p in pset]) * degree2km * 1000.0
section_index = np.searchsorted(lonlat.lon,pset.lon)-1
u_normal = v * lonlatdiff.costheta[section_index].data - u * lonlatdiff.sintheta[section_index].data
plt.scatter(
[p.lon for p in pset],
[p.depth for p in pset],
1,
u_normal,
cmap=co.cm.balance,vmin=-0.3,vmax=0.3
)
plt.ylim(1200,0)
plt.colorbar(label = r'normal velocity [$\mathrm{m\ s}^{-1}$]');
plt.xlabel('longitude [$\degree$E]')
plt.ylabel('depth [m]')
return
class SampleParticle(JITParticle):
"""Add variables to the standard particle class.
Particles will sample temperature and track the age of the particle.
Particles also have a flag `alive` that is 1 if the particle is alive and 0 otherwise.
Furthermore, we have a `speed_param` that scales the velocity with which particles can
swim towards the surface.
Note that we don't initialize temp from the actual data.
This speeds up particle creation, but might render initial data point less useful.
"""
mxl = Variable('mxl', dtype=np.float32, initial=-100)
temp = Variable('temp', dtype=np.float32, initial=-100)
salt = Variable('salt', dtype=np.float32, initial=-100)
uvel = Variable('uvel', dtype=np.float32, initial=0)
vvel = Variable('vvel', dtype=np.float32, initial=0)
# wvel = Variable('wvel', dtype=np.float32, initial=0)
# alive = Variable('alive', dtype=np.int32, initial=1)
# speed_param = Variable('speed_param', dtype=np.float32, initial=1)
# age = Variable('age', dtype=np.int32, initial=0, to_write=True) | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Create a set of particles with random initial positions
We seed the RNG to be reproducible (and to be able to quickly create a second, equivalent experiment with differently chosen compatible initial positions), and create arrays of random starting times, lats, lons, depths, and speed parameters (see kernel definitions below for details).
Initially we create points on a 'rectangle'; land points are removed later in an OceanParcels 'run' with runtime and timedelta zero.
First set up the piecewise section. | lonlat = xr.Dataset(pd.read_csv(sectionPath / sectionFilename,delim_whitespace=True))
lonlat.lon.attrs['long_name']='Longitude'
lonlat.lat.attrs['long_name']='Latitude'
lonlat.lon.attrs['standard_name']='longitude'
lonlat.lat.attrs['standard_name']='latitude'
lonlat.lon.attrs['units']='degrees_east'
lonlat.lat.attrs['units']='degrees_north'
lonlatdiff = lonlat.diff('dim_0')
lonlat2mean= lonlat.rolling({'dim_0':2}).mean()
lonlat.plot.scatter(x='lon',y='lat')
lonlat2mean.plot.scatter(x='lon',y='lat')
lonlatdiff = lonlatdiff.assign({'y':lonlatdiff['lat']*degree2km})
lonlatdiff = lonlatdiff.assign({'x':lonlatdiff['lon']*degree2km*np.cos(np.radians(lonlat2mean.lat.data[1:]))})
lonlatdiff=lonlatdiff.assign({'length':np.sqrt(lonlatdiff['x']**2+lonlatdiff['y']**2)})
lonlatdiff=lonlatdiff.assign({'length_west':lonlatdiff.length.sum() - np.cumsum(lonlatdiff.length[::-1])[::-1]})
lonlatdiff=lonlatdiff.assign({'costheta':lonlatdiff['x']/lonlatdiff['length']})
lonlatdiff=lonlatdiff.assign({'sintheta':lonlatdiff['y']/lonlatdiff['length']})
total_length = lonlatdiff.length.sum().data
print(total_length)
lonlatdiff.length.shape[0] | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
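With cos(theta) = dx/length and sin(theta) = dy/length for each section segment, the velocity component normal to the section (used in the plotting routines above and in the thinning step below) is u_normal = v*cos(theta) - u*sin(theta), i.e. the horizontal velocity projected onto the segment's left-hand normal. A tiny illustrative check (not part of the experiment):
# northward flow across a purely zonal segment (theta = 0) is entirely normal to it
# u_chk, v_chk = 0.0, 0.1
# u_normal_chk = v_chk*np.cos(0.0) - u_chk*np.sin(0.0)   # = 0.1 m/s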
Seed particles uniform random along OSNAP section | np.random.seed(RNG_seed)
# define time of release for each particle relative to t0
# can start each particle at a different time if required
# here all start at time t_start.
times = []
lons = []
lats = []
depths = []
# for subsect in range(lonlatdiff.length.shape[0]):
for subsect in range(start_vertex,end_vertex):
number_particles = int(create_number_particles*lonlatdiff.length[subsect]/total_length)
time = np.zeros(number_particles)
time += (t_start - t_0).total_seconds()
# start along a line from west to east
west_lat = lonlat.lat[subsect].data
west_lon = lonlat.lon[subsect].data
east_lat = lonlat.lat[subsect+1].data
east_lon = lonlat.lon[subsect+1].data
lon = np.random.uniform(
low=west_lon, high = east_lon,
size=time.shape
)
lat = west_lat + ((lon - west_lon) * (east_lat - west_lat)/ (east_lon - west_lon))
# at depths from surface to max_release_depth
depth = np.random.uniform(
low=min_release_depth, high=max_release_depth,
size=time.shape
)
times.append(time)
lons.append(lon)
lats.append(lat)
depths.append(depth)
time = np.concatenate(times)
lon = np.concatenate(lons)
lat = np.concatenate(lats)
depth = np.concatenate(depths)
| _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Build particle set | %%time
pset = ParticleSet(
fieldset=fieldset,
pclass=SampleParticle,
lat=lat,
lon=lon,
# speed_param=speed_param,
depth=depth,
time=time
# repeatdt = repeatdt
)
print(f"Created {len(pset)} particles.")
# display(pset[:5])
# display(pset[-5:]) | Created 2643886 particles.
| MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Compose custom kernel
We'll create four additional kernels:
- One kernel adds velocity sampling
- One kernel adds temperature sampling
- One kernel adds salinity sampling
- One kernel adds mixed-layer-depth sampling
Then, we combine the builtin `AdvectionRK4_3D` kernel with these additional kernels. | def velocity_sampling(particle, fieldset, time):
'''Sample velocity.'''
(particle.uvel,particle.vvel) = fieldset.UV[time, particle.depth, particle.lat, particle.lon]
def temperature_sampling(particle, fieldset, time):
'''Sample temperature.'''
particle.temp = fieldset.T[time, particle.depth, particle.lat, particle.lon]
def salinity_sampling(particle, fieldset, time):
'''Sample salinity.'''
particle.salt = fieldset.S[time, particle.depth, particle.lat, particle.lon]
def mxl_sampling(particle, fieldset, time):
'''Sample mixed layer depth.'''
particle.mxl = fieldset.MXL[time, particle.depth, particle.lat, particle.lon]
custom_kernel = (
pset.Kernel(AdvectionRK4_3D)
# + pset.Kernel(temperature_sensitivity)
+ pset.Kernel(temperature_sampling)
+ pset.Kernel(salinity_sampling)
+ pset.Kernel(velocity_sampling)
+ pset.Kernel(mxl_sampling)
) | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Be able to handle errors during integration
We have restricted our domain, so in principle particles could reach undefined positions.
In that case, we want to just delete the particle (without forgetting its history). | def DeleteParticle(particle, fieldset, time):
particle.delete()
recovery_cases = {
ErrorCode.ErrorOutOfBounds: DeleteParticle,
ErrorCode.Error: DeleteParticle,
ErrorCode.ErrorInterpolation: DeleteParticle
} | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Run with runtime=0 to initialise fields | %%time
# with dask.config.set(**{'array.slicing.split_large_chunks': False}):
pset.execute(
custom_kernel,
runtime=0,
# dt=timedelta(minutes=0),
# output_file=outputfile,
recovery=recovery_cases
)
plot_section_sdist() | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Trim unwanted points from ParticleSet
Use initialised fields to remove land points. We test `temp == 0.0` (the mask value over land). | t = np.array([p.temp for p in pset])
# u = np.array([p.uvel for p in pset])
# v = np.array([p.vvel for p in pset])
pset.remove_indices(np.argwhere(t == 0).flatten())
# pset.remove(np.argwhere(x * y * z == 0).flatten())
print(len(pset))
plot_section_sdist() | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Test velocity normal to section
Velocity conversions from degrees lat/lon per second to m/s | u = np.array([p.uvel for p in pset])
v = np.array([p.vvel for p in pset])
u=u * degree2km * 1000.0 * np.cos(np.radians(pset.lat))
v=v * degree2km * 1000.0 | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
normal velocities | section_index = np.searchsorted(lonlat.lon,pset.lon)-1
u_normal = v * lonlatdiff.costheta[section_index].data - u * lonlatdiff.sintheta[section_index].data
abs(u_normal).max() | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Thin the particle set randomly so that each particle is kept with probability proportional to its speed normal to the section | u_random = np.random.rand(len(u_normal))*max_current
pset.remove_indices(np.argwhere(abs(u_normal) < u_random).flatten())
print(len(pset))
plot_section_sdist() | _____no_output_____ | MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
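Since u_random is drawn uniformly from [0, max_current], each particle survives this thinning with probability P(keep) = min(|u_normal|, max_current)/max_current, so the density of retained particles is proportional to the local speed normal to the section (particles faster than max_current are always kept).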
Prepare output
We define an output file and specify the desired output frequency. | # output_filename = 'Parcels_IFFForwards_1m_June2016_2000.nc'
npart = str(len(pset))
output_filename = 'tracks_randomvel_mxl_'+sectionname+direction+year_str+month_str+day_str+'_N'+npart+'_D'+days+'_Rnd'+ seed+'.nc'
outfile = outpath / output_filename
print(outfile)
outputfile = pset.ParticleFile(
name=outfile,
outputdt=timedelta(hours=outputdt_in_hours)
) | ../data/raw/tracks_randomvel_mxl_osnap_backward_20191020_N59894_D3650_Rnd14535.nc
| MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Execute the experiment
We'll evolve particles, log their positions and variables to the output buffer and finally export the output to the file.
Run the experiment | %%time
# with dask.config.set(**{'array.slicing.split_large_chunks': False}):
pset.execute(
custom_kernel,
runtime=timedelta(days=runtime_in_days),
dt=timedelta(minutes=dt_in_minutes),
output_file=outputfile,
recovery=recovery_cases
)
# outputfile.export()
outputfile.close()
conda list
pip list
| Package Version
----------------------------- --------------------------
alembic 1.5.5
ansiwrap 0.8.4
anyio 2.2.0
appdirs 1.4.4
argon2-cffi 20.1.0
asciitree 0.3.3
async-generator 1.10
attrs 20.3.0
Babel 2.9.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
basemap 1.2.1
black 20.8b1
bleach 3.3.0
blinker 1.4
blosc 1.10.2
bokeh 2.3.0
Bottleneck 1.3.2
brotlipy 0.7.0
cached-property 1.5.2
cachetools 4.2.1
Cartopy 0.18.0
certifi 2020.12.5
certipy 0.1.3
cffi 1.14.5
cftime 1.4.1
cgen 2020.1
chardet 4.0.0
click 7.1.2
click-plugins 1.1.1
cligj 0.7.1
cloudpickle 1.6.0
cmocean 2.0
colorcet 2.0.6
colorspacious 1.1.2
conda 4.9.2
conda-package-handling 1.7.2
cryptography 3.4.4
cycler 0.10.0
cytoolz 0.11.0
dask 2021.2.0
datashader 0.12.0
datashape 0.5.4
decorator 4.4.2
defusedxml 0.6.0
distributed 2021.2.0
entrypoints 0.3
fasteners 0.14.1
Fiona 1.8.18
fsspec 0.8.7
GDAL 3.2.1
geopandas 0.9.0
geoviews 0.0.0+g33876c88.gitarchive
gsw 3.4.0
h5netcdf 0.10.0
h5py 3.1.0
HeapDict 1.0.1
holoviews 1.14.2
hvplot 0.7.1
idna 2.10
importlib-metadata 3.7.0
ipykernel 5.5.0
ipython 7.21.0
ipython-genutils 0.2.0
jedi 0.18.0
Jinja2 2.11.3
joblib 1.0.1
json5 0.9.5
jsonschema 3.2.0
jupyter-client 6.1.11
jupyter-core 4.7.1
jupyter-packaging 0.7.12
jupyter-server 1.4.1
jupyter-telemetry 0.1.0
jupyterhub 1.3.0
jupyterlab 3.0.9
jupyterlab-pygments 0.1.2
jupyterlab-server 2.3.0
kiwisolver 1.3.1
llvmlite 0.36.0
locket 0.2.0
Mako 1.1.4
mamba 0.7.14
Markdown 3.3.4
MarkupSafe 1.1.1
matplotlib 3.3.4
mistune 0.8.4
monotonic 1.5
msgpack 1.0.2
multipledispatch 0.6.0
munch 2.5.0
mypy-extensions 0.4.3
nbclassic 0.2.6
nbclient 0.5.3
nbconvert 6.0.7
nbformat 5.1.2
nest-asyncio 1.4.3
netCDF4 1.5.6
notebook 6.2.0
numba 0.53.0
numcodecs 0.7.3
numpy 1.20.1
oauthlib 3.0.1
olefile 0.46
packaging 20.9
pamela 1.0.0
pandas 1.2.3
pandocfilters 1.4.2
panel 0.11.0
papermill 2.3.3
param 1.10.1
parcels 2.2.2
parso 0.8.1
partd 1.1.0
pathspec 0.8.1
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.1.2
pip 21.0.1
progressbar2 3.53.1
prometheus-client 0.9.0
prompt-toolkit 3.0.16
psutil 5.8.0
ptyprocess 0.7.0
pycosat 0.6.3
pycparser 2.20
pyct 0.4.6
pycurl 7.43.0.6
Pygments 2.8.0
PyJWT 2.0.1
pymbolic 2020.1
pyOpenSSL 20.0.1
pyparsing 2.4.7
pyproj 3.0.1
PyQt5 5.12.3
PyQt5-sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.17.3
pyshp 2.1.3
PySocks 1.7.1
python-dateutil 2.8.1
python-editor 1.0.4
python-json-logger 2.0.1
python-utils 2.5.5
pytools 2021.2
pytz 2021.1
pyviz-comms 2.0.1
PyYAML 5.4.1
pyzmq 22.0.3
regex 2020.11.13
requests 2.25.1
Rtree 0.9.7
ruamel-yaml-conda 0.15.80
ruamel.yaml 0.16.12
ruamel.yaml.clib 0.2.2
scikit-learn 0.24.1
scipy 1.6.1
seaborn 0.11.1
seawater 3.3.4
Send2Trash 1.5.0
setuptools 49.6.0.post20210108
Shapely 1.7.1
six 1.15.0
sniffio 1.2.0
sortedcontainers 2.3.0
sparse 0.11.2
SQLAlchemy 1.3.23
statsmodels 0.12.2
tblib 1.6.0
tenacity 7.0.0
terminado 0.9.2
testpath 0.4.4
textwrap3 0.9.2
threadpoolctl 2.1.0
toml 0.10.2
toolz 0.11.1
tornado 6.1
tqdm 4.58.0
traitlets 5.0.5
typed-ast 1.4.2
typing-extensions 3.7.4.3
urllib3 1.26.3
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.36.2
xarray 0.17.0
xhistogram 0.1.2
zarr 2.6.1
zict 2.0.0
zipp 3.4.0
| MIT | notebooks/executed/037_afox_RunParcels_TS_MXL_Multiline_Randomvel_Papermill_executed_2019-10-20.ipynb | alanfox/spg_fresh_blob_202104 |
Plotly - Create Candlestick chart
**Tags:** plotly chart candlestick trading dataviz
Input
Import libraries | import plotly.graph_objects as go
import pandas as pd
from datetime import datetime | _____no_output_____ | BSD-3-Clause | Plotly/Create Candlestick chart.ipynb | vivard/awesome-notebooks |
Model
Read a CSV and build the candlestick figure | df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv')
fig = go.Figure(data=[go.Candlestick(x=df['Date'],
open=df['AAPL.Open'],
high=df['AAPL.High'],
low=df['AAPL.Low'],
close=df['AAPL.Close'])]) | _____no_output_____ | BSD-3-Clause | Plotly/Create Candlestick chart.ipynb | vivard/awesome-notebooks |
Output Display result | fig.show() | _____no_output_____ | BSD-3-Clause | Plotly/Create Candlestick chart.ipynb | vivard/awesome-notebooks |
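To persist the chart as well, the figure can be exported; a minimal example (the filename is arbitrary):
# fig.write_html('candlestick.html')  # static images via fig.write_image need an extra engine such as kaleido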
The idea is to do random patches but try out different methodologies regarding the sampling procedure. First, in the form of weighted samples, where ideas from Breiman's paper (pasting) and AdaBoost can be used. Second, in the form of weighted features, weighted with respect to correlation (chi square, best of k?) between the selected samples.
- [Link to Breiman paper](https://link.springer.com/content/pdf/10.1023%2FA%3A1007563306331.pdf)
- [Link to Louppe paper](https://orbi.uliege.be/bitstream/2268/130099/1/glouppe12.pdf)
Some more similar ideas:
- Use for each new estimator the n1 closest samples from the same class and the n2 closest from other classes. If the average distance between the sample and the same-class samples is bigger than the average distance to the other-class samples, do not take it into account and treat it as an outlier? n1 and n2 should probably be tuned.
- Based on the above, maybe pick the largest possible linearly separable dataset for this sample?
- Idea of linearly separable classifiers: do we need one good classifier plus multiple linear ones for the misclassified samples, or should we go directly for multiple classifiers? | from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import itertools
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from brew.base import Ensemble, EnsembleClassifier
from brew.stacking.stacker import EnsembleStack, EnsembleStackClassifier
from brew.combination.combiner import Combiner
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from mlxtend.data import wine_data, iris_data
from mlxtend.plotting import plot_decision_regions
from sklearn.base import TransformerMixin, BaseEstimator
from sklearn.ensemble import ExtraTreesClassifier
from itertools import combinations
import random
random.seed(10)
X, y = wine_data()
knn = KNeighborsClassifier()
bagging = BaggingClassifier(knn, max_samples=0.5, max_features=0.5)
print("KNN Score:")
print(cross_val_score(knn, X, y, cv=5, n_jobs=-1).mean())
print("Bagging Score:")
print(cross_val_score(bagging, X, y, cv=5, n_jobs=-1).mean())
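A minimal, self-contained sketch of the AdaBoost-style sample-weight update mentioned in the introduction (purely illustrative; it reuses the knn, X and y defined above and is not part of the classifier built below):
# err = 1 - training accuracy, alpha = 0.5*log((1-err)/err);
# misclassified samples get their weight multiplied by exp(+alpha), correct ones by exp(-alpha)
knn.fit(X, y)
preds_sketch = knn.predict(X)
err = max(1e-6, 1.0 - accuracy_score(y, preds_sketch))
alpha = 0.5 * np.log((1 - err) / err)
signs_sketch = np.where(preds_sketch == y, 1.0, -1.0)
w_sketch = np.full(len(y), 1.0 / len(y))
w_sketch *= np.exp(-alpha * signs_sketch)
w_sketch /= w_sketch.sum()
idx_sketch = np.random.choice(len(y), size=len(y) // 2, p=w_sketch)  # weighted draw of a patch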
import matplotlib.pyplot as plt
import numpy
x = numpy.linspace(0,1,100) # 100 linearly spaced error rates between 0 and 1
yy = np.exp(0.5*np.log((1-x)/x)) # exp(alpha) for alpha = 0.5*log((1-err)/err), i.e. sqrt((1-x)/x)
# compose plot
plt.plot(x,yy)
#plt.plot(x,yy,'co') # same curve with cyan dots
plt.show() # show the plot
print(a)
print(signs[0])
print(np.exp(-a*signs[0]))
print(signs[4])
print(np.exp(-a*signs[4]))
knn.fit(X,y)
preds2 = knn.predict(X)
sc = 1-accuracy_score(y,preds2, normalize=True)
print(sc)
a = 0.5*np.log((1-sc)/float(sc))
print(a)
signs = sign(y, preds2)
#print(signs)
temp =np.array([1/float(len(y)) for i in y])*np.exp(-a*signs)
print(temp)
print(temp/sum(temp))
for i, sign_ in enumerate(signs):
if sign_ <0:
pass
#print("DIFFERENT")
#print(y[i], preds2[i])
#print(np.mean(temp), temp[i])
elif sign_ >0:
pass
#print("SAME")
#print(y[i], preds2[i])
#print(np.mean(temp), temp[i])
ssign = np.ones_like(preds)
ssign[np.where(preds!=y)] = -1
ssign
#rp2.fit(X,y)
preds = rp2.predict(X)
rp2.samples_weights[-1][np.argsort(rp2.samples_weights[-1]).flatten()]
#for i, y_ in enumerate(preds):
#print("%d -- %d" % (y_, y[i]))
#print(rp2.samples_weights[-1][i])
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import check_X_y, column_or_1d, check_random_state
from sklearn.utils.multiclass import check_classification_targets
from sklearn.model_selection import train_test_split
from sklearn.utils import check_random_state, check_X_y, check_array, column_or_1d
from sklearn.utils.random import sample_without_replacement
from sklearn.utils.validation import has_fit_parameter, check_is_fitted
from sklearn.utils import indices_to_mask, check_consistent_length
from sklearn.utils.metaestimators import if_delegate_has_method
from sklearn.utils.multiclass import check_classification_targets
def _generate_indices(random_state, bootstrap, n_population, n_samples, prob=None):
    """Draw randomly sampled indices, optionally weighted by `prob`."""
    # Use weighted sampling only when a proper probability vector is supplied
    # (callers may pass None, or a list containing None, to request unweighted draws).
    use_prob = prob is not None and not any(p is None for p in np.atleast_1d(prob))
    if use_prob:
        indices = random_state.choice(np.arange(n_population), n_samples, p=prob)
    else:
        if bootstrap:
            indices = random_state.randint(0, n_population, n_samples)
        else:
            indices = sample_without_replacement(n_population, n_samples,
                                                 random_state=random_state)
    return indices
def generate_bagging_indices(random_state, bootstrap_features,
bootstrap_samples, n_features, n_samples,
max_features, max_samples, samples_weights):
"""Randomly draw feature and sample indices."""
# Get valid random state
random_state = check_random_state(random_state)
# Draw indices
#print(random_state, type(random_state))
feature_indices = _generate_indices(random_state, bootstrap_features,
n_features, max_features)
#print(bootstrap_samples, n_samples, max_samples, samples_weights)
sample_indices = _generate_indices(random_state, bootstrap_samples,
n_samples, max_samples, prob=samples_weights)
return feature_indices, sample_indices
def sign(true, preds):
ssign = np.ones_like(true)
ssign[np.where(preds!=true)] = -1
return ssign
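A quick, illustrative check of the weighted index draw used by the patcher (assumes the helpers above; the numbers are arbitrary):
# rs_check = check_random_state(0)
# w_check = np.array([0.7, 0.1, 0.1, 0.1])
# idx_check = _generate_indices(rs_check, True, 4, 10, prob=w_check)
# index 0 should dominate the draw because it carries 70% of the probability mass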
class Vanilla_RP(BaseEstimator, ClassifierMixin):
def __init__(self,
base_estimator_=KNeighborsClassifier(),
n_estimators=10,
max_samples=1.0,
max_features=1.0,
bootstrap_samples=True,
bootstrap_features = False,
patcher='random',
dev_set=0.1,
random_state=42):
self.base_estimator_ = base_estimator_
self.n_estimators= n_estimators
self.max_samples = max_samples
self.max_features = max_features
self.bootstrap_samples = bootstrap_samples
self.bootstrap_features = bootstrap_features
self.patcher = patcher
self.dev_set = 0.1
self.random_state = check_random_state(random_state)
self.ensemble = []
self.prev_samples_indices = []
self.estimators_features = []
self.samples_weights = []
self.samples_times_selected = None
self.scores = []
self.a = 0
self.X_dev = None
self.y_dev = None
def fit(self, X, y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
#print(y)
#print(type(y), y.shape)
#print(X.shape)
#X, self.X_dev, y, self.y_dev = train_test_split(X, y, test_size=self.dev_set, stratify=y)
n_samples, self.n_features_ = X.shape
self._n_samples = n_samples
self.samples_times_selected = np.zeros_like(y)
self.default_sample_weight = 1/float(X.shape[0])
self.max_samples = int(self.max_samples*self._n_samples)
self.max_features = int(self.max_features*self.n_features_)
y = self._validate_y(y)
for i_est in xrange(self.n_estimators):
estimator = clone(self.base_estimator_)
if self.patcher == 'random':
features_indices, samples_indices = generate_bagging_indices(
self.random_state, self.bootstrap_features,
self.bootstrap_samples, X.shape[1], X.shape[0],
self.max_features, self.max_samples, [None])
elif self.patcher == 'weighted':
# X_train, X_dev, y_train, y_dev = train_test_split(X, y, stratify = True,
# test_size=self.dev_set,
# random_state=self.random_state)
if i_est==0:
self.samples_weights.append(np.array([self.default_sample_weight for i in xrange(X.shape[0])]))
self.scores.append(1)
signs = np.ones_like(y)
else:
signs = self.update_weights(X, y)
#print("ROUND %d"%i_est)
#print(self.samples_weights[i_est])
#print("INDEX %d" % i)
#print(self.samples_weights[i_est])
features_indices, samples_indices = generate_bagging_indices(
self.random_state, self.bootstrap_features,
self.bootstrap_samples, X.shape[1], X.shape[0],
self.max_features, self.max_samples, self.samples_weights[i_est])
# print("ROUND %d" % i_est)
# print("SCORE %0.3f"%(self.scores[-1]))
if i_est != 0:
accs = []
accs_last = []
overlaps = []
accs_total = []
for jj, ens in enumerate(self.ensemble):
# print(len(self.prev_samples_indices), len(self.estimators_features))
p_pred = ens.predict(X[self.prev_samples_indices[jj]][:, self.estimators_features[jj]])
last_pred = ens.predict(X[self.prev_samples_indices[-1]][:, self.estimators_features[jj]])
total_pred = self.predict(X[self.prev_samples_indices[jj]])
accs.append(1-accuracy_score(y[self.prev_samples_indices[jj]], p_pred, normalize=True))
accs_last.append(1-accuracy_score(y[self.prev_samples_indices[-1]], last_pred, normalize=True))
accs_total.append(1-accuracy_score(y[self.prev_samples_indices[jj]], total_pred, normalize=True))
#try:
# self.ensemble[jj+1]
#except IndexError:
# print("CURRENT")
# print(self.prev_samples_indices[jj])
# print("LAST")
# print(self.prev_samples_indices[-1])
# print(np.array_equal(self.prev_samples_indices[jj], self.prev_samples_indices[-1]))
# print(np.intersect1d(self.prev_samples_indices[jj], self.prev_samples_indices[-1]).shape[0], self.prev_samples_indices[-1].shape[0], self.prev_samples_indices[jj].shape[0])
overlaps.append(np.intersect1d(self.prev_samples_indices[jj], self.prev_samples_indices[-1]).shape[0]/float(np.unique(self.prev_samples_indices[jj]).shape[0]))
accs = np.round(100*np.array(accs),3)
accs_last = np.round(100*np.array(accs_last),3)
overlaps = np.round(100*np.array(overlaps),3)
accs_total = np.round(100*np.array(accs_total),3)
# print("PREVIOUS ERRORS ON CORRESPONDING DATA")
# print(accs)
# print("ERRORS OF THE ENSEMBLE ON CORRESPONDING DATA")
# print(accs_total)
# print("PREVIOUS ERRORS ON LAST SELECTED DATA")
# print(accs_last)
# print("OVERLAPS")
# print(overlaps)
# print("a: %0.3f"%self.a)
# print("WEIGHTS")
# print(np.min(self.samples_weights[-1]),
# np.mean(self.samples_weights[-1]),
# np.max(self.samples_weights[-1]))
# #print(self.samples_weights[-1])
# print("SAMPLED INSTANCES")
# print(self.samples_weights[-1][samples_indices])
# print(signs[samples_indices])
# print(y[samples_indices])
# print("~"*50)
else:
print("UNSUPPORTED WAY OF PATCHING: %s !" % self.patcher)
# minor fix for when one class is not represented during sampling
#print(samples_indices.shape)
samples_indices = self.fix_class_indices(y, samples_indices)
#print("AFTER")
#print(samples_indices.shape)
estimator.fit(X[samples_indices][:, features_indices], y[samples_indices])
self.prev_samples_indices.append(samples_indices)
self.estimators_features.append(features_indices)
self.ensemble.append(estimator)
self.samples_times_selected[samples_indices] += 1
return self
def fix_class_indices(self, y, samples_indices):
in_set = set(y[samples_indices])
a = set(y).difference(in_set)
for item in a:
samples_indices= np.append(samples_indices, [np.where(y==item)[0][0]])
return samples_indices
def update_weights(self, X, y):
#print("REMOVED %d" % np.where(self.samples_times_selected > 3)[0].shape[0])
#self.samples_weights[-1][np.where(self.samples_times_selected > 3)] = self.default_sample_weight
#self.samples_times_selected[np.where(self.samples_times_selected > 3)] = 0
preds = self.predict(X)
self.scores.append(1-accuracy_score(y, preds, normalize=True)+0.0001)
self.a = 0.5*np.log((1-self.scores[-1])/float(self.scores[-1]))
#self.a = 9
#self.a = 1
signs = sign(y, preds)
temp = self.samples_weights[-1]*np.exp(-self.a*signs)
#print("MEAN")
#print(np.mean(temp))
#print(temp[np.where(signs<0)])
self.samples_weights.append(temp/np.sum(temp))
preds = np.ones((X.shape[0], len(self.ensemble)))
for ii, est in enumerate(self.ensemble):
sc = accuracy_score(y, est.predict(X[:, self.estimators_features[ii]]), normalize=True)
preds[:, ii] = sc*sign(y, est.predict(X[:, self.estimators_features[ii]]))
#print("INITIAL")
#print(preds)
#print(np.min(preds))
#print("AFTER COLLAPSE")
preds = np.sum(preds, axis=1)
#print(preds)
#print("AFTER RESHAPE")
preds = preds.reshape(-1,)
#print(preds)
if np.any(preds<0):
min_ = np.min(preds)
if min_ < 0:
min_ = -1*min_
#print("MIN")
#print(min_)
preds = preds + min_ + 0.001
#else:
#preds = preds - np.min(preds) + 0.001
#print("AFTER_MIN")
#print(preds)
#print("AFTER NORMALIZATION")
preds /= np.sum(preds)
#print(preds)
#print(preds)
#print(np.sum(preds))
self.samples_weights.append(preds)
return signs
def _validate_y(self, y):
y = column_or_1d(y, warn=True)
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def predict(self, X):
"""Predict class for X.
The predicted class of an input sample is computed as the class with
the highest mean predicted probability. If base estimators do not
implement a ``predict_proba`` method, then it resorts to voting.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
if hasattr(self.base_estimator_, "predict_proba"):
predicted_probability = self.predict_proba(X)
return self.classes_.take((np.argmax(predicted_probability, axis=1)),
axis=0)
else:
predicted_probability = np.zeros((X.shape[0],1))
for i, ens in enumerate(self.ensemble):
predicted_probability = np.hstack((predicted_probability, ens.predict(X[:, self.estimators_features[i]]).reshape(-1,1)))
predicted_probability = np.delete(predicted_probability,0,axis=1)
final_pred = []
for sample in xrange(X.shape[0]):
final_pred.append(most_common(predicted_probability[sample,:]))
#votes = []
#for i, mod_vote in predictions[sample,:]:
# votes.extend([predictions[sample, i] for j in xrange(int(self.acc[i]))])
#final_pred = most_common(votes)
return np.array(final_pred)
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the mean predicted class probabilities of the base estimators in the
ensemble. If base estimators do not implement a ``predict_proba``
method, then it resorts to voting and the predicted class probabilities
of an input sample represents the proportion of estimators predicting
each class.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
all_proba = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_proba += ens.predict_proba(X[:, self.estimators_features[i]])
all_proba /= self.n_estimators
#print(all_proba.shape)
#print(all_proba)
#proba = np.sum(all_proba, axis=0) / self.n_estimators
#print(proba.shape)
#print(proba)
return all_proba
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
"""Average of the decision functions of the base classifiers.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The columns correspond
to the classes in sorted order, as they appear in the attribute
``classes_``. Regression and binary classification are special
cases with ``k == 1``, otherwise ``k==n_classes``.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
all_decisions = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_decisions += ens.predict_proba(X[:, self.estimators_features[i]])
decisions = sum(all_decisions) / self.n_estimators
return decisions
def most_common(lst):
if isinstance(lst, np.ndarray):
lst = lst.tolist()
#print(lst, max(set(lst), key=lst.count) )
return max(set(lst), key=lst.count)
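For example (illustrative):
# most_common([1.0, 2.0, 2.0, 0.0])  # -> 2.0, the majority vote over one row of per-estimator predictions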
min(min(np.array([[1,2]])))
#X, y = wine_data()
rp2 = Vanilla_RP(knn,max_samples=0.5, max_features=0.5, patcher='weighted')
rp2.fit(X,y)
#train_test_split(X,y,test_size=0.2, stratify=y)
knn = KNeighborsClassifier()
bagging = BaggingClassifier(knn, max_samples=0.5, max_features=0.5)
rp = Vanilla_RP(knn,max_samples=0.5, max_features=0.5)
rp2 = Vanilla_RP(knn,max_samples=0.5, max_features=0.5, patcher='weighted')
cv = 10
print("KNN Score:")
print(cross_val_score(knn, X, y, cv=cv, n_jobs=-1).mean())
print("Bagging Score:")
print(cross_val_score(bagging, X, y, cv=cv, n_jobs=-1).mean())
print("RP Score:")
print(cross_val_score(rp, X, y, cv=cv, n_jobs=-1).mean())
print("RP-WEIGHTED Score:")
print(cross_val_score(rp2, X, y, cv=cv, n_jobs=-1).mean())
"""Bagging meta-estimator."""
# Author: Gilles Louppe <[email protected]>
# License: BSD 3 clause
from __future__ import division
import itertools
import numbers
import numpy as np
from warnings import warn
from abc import ABCMeta, abstractmethod
from sklearn.base import ClassifierMixin, RegressorMixin
from sklearn.externals.joblib import Parallel, delayed
from sklearn.externals.six import with_metaclass
from sklearn.externals.six.moves import zip
from sklearn.metrics import r2_score, accuracy_score
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.utils import check_random_state, check_X_y, check_array, column_or_1d
from sklearn.utils.random import sample_without_replacement
from sklearn.utils.validation import has_fit_parameter, check_is_fitted
from sklearn.utils import indices_to_mask, check_consistent_length
from sklearn.utils.metaestimators import if_delegate_has_method
from sklearn.utils.multiclass import check_classification_targets
from sklearn.ensemble.base import BaseEnsemble, _partition_estimators
__all__ = ["BaggingClassifier",
"BaggingRegressor"]
MAX_INT = np.iinfo(np.int32).max
def _generate_indices(random_state, bootstrap, n_population, n_samples, prob= None):
"""Draw randomly sampled indices."""
# Draw sample indices
    if prob is not None:
indices = random_state.choice([i for i in xrange(0, n_population)], n_samples, p=prob)
else:
if bootstrap:
indices = random_state.randint(0, n_population, n_samples)
else:
indices = sample_without_replacement(n_population, n_samples,
random_state=random_state)
return indices
def _generate_bagging_indices(random_state, bootstrap_features,
                              bootstrap_samples, n_features, n_samples,
                              max_features, max_samples, prob=None):
    """Randomly draw feature and sample indices (sample indices optionally weighted by `prob`)."""
    # Get valid random state
    random_state = check_random_state(random_state)
    # Draw indices
    feature_indices = _generate_indices(random_state, bootstrap_features,
                                        n_features, max_features)
    sample_indices = _generate_indices(random_state, bootstrap_samples,
                                       n_samples, max_samples, prob=prob)
    return feature_indices, sample_indices
def sign_(true, preds):
    """Return +1 where the prediction matches the true label and -1 otherwise
    (same convention as the `sign` helper used by Vanilla_RP above)."""
    ssign = np.ones_like(true)
    ssign[np.where(preds != true)] = -1
    return ssign
def _parallel_build_estimators(n_estimators, ensemble, X, y, sample_weight,
seeds, total_n_estimators, verbose, sample_weights):
"""Private function used to build a batch of estimators within a job."""
# Retrieve settings
n_samples, n_features = X.shape
max_features = ensemble._max_features
max_samples = ensemble._max_samples
bootstrap = ensemble.bootstrap
bootstrap_features = ensemble.bootstrap_features
support_sample_weight = has_fit_parameter(ensemble.base_estimator_,
"sample_weight")
if not support_sample_weight and sample_weight is not None:
raise ValueError("The base estimator doesn't support sample weight")
# Build estimators
estimators = []
estimators_features = []
for i in range(n_estimators):
if verbose > 1:
print("Building estimator %d of %d for this parallel run "
"(total %d)..." % (i + 1, n_estimators, total_n_estimators))
random_state = np.random.RandomState(seeds[i])
estimator = ensemble._make_estimator(append=False,
random_state=random_state)
        # Draw random feature, sample indices
        # NOTE: work-in-progress AdaBoost-style reweighting (sketch / assumption):
        # `a` uses the usual 0.5*log((1-err)/err) form with the ensemble's OOB error,
        # and the incoming `sample_weights` are only renormalised here; the full
        # per-sample exp(-a*sign) update still needs to be wired in.
        a = 0.5 * np.log((1 - ensemble.oob_score_) / ensemble.oob_score_) if hasattr(ensemble, 'oob_score_') else 0.0
        if sample_weights is not None:
            sample_weights = np.asarray(sample_weights, dtype=float)
            sample_weights = sample_weights / np.sum(sample_weights)
        features, indices = _generate_bagging_indices(random_state,
                                                      bootstrap_features,
                                                      bootstrap, n_features,
                                                      n_samples, max_features,
                                                      max_samples,
                                                      prob=sample_weights)
# Draw samples, using sample weights, and then fit
if support_sample_weight:
if sample_weight is None:
curr_sample_weight = np.ones((n_samples,))
else:
curr_sample_weight = sample_weight.copy()
if bootstrap:
sample_counts = np.bincount(indices, minlength=n_samples)
curr_sample_weight *= sample_counts
else:
not_indices_mask = ~indices_to_mask(indices, n_samples)
curr_sample_weight[not_indices_mask] = 0
estimator.fit(X[:, features], y, sample_weight=curr_sample_weight)
# Draw samples, using a mask, and then fit
else:
estimator.fit((X[indices])[:, features], y[indices])
estimators.append(estimator)
estimators_features.append(features)
return estimators, estimators_features
def _parallel_predict_proba(estimators, estimators_features, X, n_classes):
"""Private function used to compute (proba-)predictions within a job."""
n_samples = X.shape[0]
    proba = np.zeros((n_samples, n_classes))
for estimator, features in zip(estimators, estimators_features):
if hasattr(estimator, "predict_proba"):
proba_estimator = estimator.predict_proba(X[:, features])
if n_classes == len(estimator.classes_):
proba += proba_estimator
else:
proba[:, estimator.classes_] += \
proba_estimator[:, range(len(estimator.classes_))]
else:
# Resort to voting
predictions = estimator.predict(X[:, features])
for i in range(n_samples):
proba[i, predictions[i]] += 1
return proba
def _parallel_predict_log_proba(estimators, estimators_features, X, n_classes):
"""Private function used to compute log probabilities within a job."""
n_samples = X.shape[0]
log_proba = np.empty((n_samples, n_classes))
log_proba.fill(-np.inf)
all_classes = np.arange(n_classes, dtype=np.int)
for estimator, features in zip(estimators, estimators_features):
log_proba_estimator = estimator.predict_log_proba(X[:, features])
if n_classes == len(estimator.classes_):
log_proba = np.logaddexp(log_proba, log_proba_estimator)
else:
log_proba[:, estimator.classes_] = np.logaddexp(
log_proba[:, estimator.classes_],
log_proba_estimator[:, range(len(estimator.classes_))])
missing = np.setdiff1d(all_classes, estimator.classes_)
log_proba[:, missing] = np.logaddexp(log_proba[:, missing],
-np.inf)
return log_proba
def _parallel_decision_function(estimators, estimators_features, X):
"""Private function used to compute decisions within a job."""
return sum(estimator.decision_function(X[:, features])
for estimator, features in zip(estimators,
estimators_features))
def _parallel_predict_regression(estimators, estimators_features, X):
"""Private function used to compute predictions within a job."""
return sum(estimator.predict(X[:, features])
for estimator, features in zip(estimators,
estimators_features))
class BaseBagging(with_metaclass(ABCMeta, BaseEnsemble)):
"""Base class for Bagging meta-estimator.
Warning: This class should not be used directly. Use derived classes
instead.
"""
@abstractmethod
def __init__(self,
base_estimator=None,
n_estimators=10,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
oob_score=False,
warm_start=False,
n_jobs=1,
random_state=None,
verbose=0):
super(BaseBagging, self).__init__(
base_estimator=base_estimator,
n_estimators=n_estimators)
self.max_samples = max_samples
self.max_features = max_features
self.bootstrap = bootstrap
self.bootstrap_features = bootstrap_features
self.oob_score = oob_score
self.warm_start = warm_start
self.n_jobs = n_jobs
self.random_state = random_state
self.verbose = verbose
self.samples_weights = None
def fit(self, X, y, sample_weight=None):
"""Build a Bagging ensemble of estimators from the training
set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
y : array-like, shape = [n_samples]
The target values (class labels in classification, real numbers in
regression).
sample_weight : array-like, shape = [n_samples] or None
Sample weights. If None, then samples are equally weighted.
Note that this is supported only if the base estimator supports
sample weighting.
Returns
-------
self : object
Returns self.
"""
return self._fit(X, y, self.max_samples, sample_weight=sample_weight)
def _fit(self, X, y, max_samples=None, max_depth=None, sample_weight=None):
"""Build a Bagging ensemble of estimators from the training
set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
y : array-like, shape = [n_samples]
The target values (class labels in classification, real numbers in
regression).
max_samples : int or float, optional (default=None)
Argument to use instead of self.max_samples.
max_depth : int, optional (default=None)
Override value used when constructing base estimator. Only
supported if the base estimator has a max_depth parameter.
sample_weight : array-like, shape = [n_samples] or None
Sample weights. If None, then samples are equally weighted.
Note that this is supported only if the base estimator supports
sample weighting.
Returns
-------
self : object
Returns self.
"""
random_state = check_random_state(self.random_state)
# Convert data
X, y = check_X_y(X, y, ['csr', 'csc'])
if sample_weight is not None:
sample_weight = check_array(sample_weight, ensure_2d=False)
check_consistent_length(y, sample_weight)
# Remap output
n_samples, self.n_features_ = X.shape
self._n_samples = n_samples
self.samples_weights = np.array([1/float(n_samples) for i in xrange(n_samples)])
y = self._validate_y(y)
# Check parameters
self._validate_estimator()
if max_depth is not None:
self.base_estimator_.max_depth = max_depth
# Validate max_samples
if max_samples is None:
max_samples = self.max_samples
elif not isinstance(max_samples, (numbers.Integral, np.integer)):
max_samples = int(max_samples * X.shape[0])
if not (0 < max_samples <= X.shape[0]):
raise ValueError("max_samples must be in (0, n_samples]")
# Store validated integer row sampling value
self._max_samples = max_samples
# Validate max_features
if isinstance(self.max_features, (numbers.Integral, np.integer)):
max_features = self.max_features
else: # float
max_features = int(self.max_features * self.n_features_)
if not (0 < max_features <= self.n_features_):
raise ValueError("max_features must be in (0, n_features]")
# Store validated integer feature sampling value
self._max_features = max_features
# Other checks
if not self.bootstrap and self.oob_score:
raise ValueError("Out of bag estimation only available"
" if bootstrap=True")
if self.warm_start and self.oob_score:
raise ValueError("Out of bag estimate only available"
" if warm_start=False")
if hasattr(self, "oob_score_") and self.warm_start:
del self.oob_score_
if not self.warm_start or not hasattr(self, 'estimators_'):
# Free allocated memory, if any
self.estimators_ = []
self.estimators_features_ = []
n_more_estimators = self.n_estimators - len(self.estimators_)
if n_more_estimators < 0:
raise ValueError('n_estimators=%d must be larger or equal to '
'len(estimators_)=%d when warm_start==True'
% (self.n_estimators, len(self.estimators_)))
elif n_more_estimators == 0:
warn("Warm-start fitting without increasing n_estimators does not "
"fit new trees.")
return self
# Parallel loop
n_jobs, n_estimators, starts = _partition_estimators(n_more_estimators,
self.n_jobs)
total_n_estimators = sum(n_estimators)
# Advance random state to state after training
# the first n_estimators
if self.warm_start and len(self.estimators_) > 0:
random_state.randint(MAX_INT, size=len(self.estimators_))
seeds = random_state.randint(MAX_INT, size=n_more_estimators)
self._seeds = seeds
all_results = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
delayed(_parallel_build_estimators)(
n_estimators[i],
self,
X,
y,
sample_weight,
seeds[starts[i]:starts[i + 1]],
total_n_estimators,
verbose=self.verbose,
sample_weights = self.samples_weights)
for i in range(n_jobs))
# Reduce
self.estimators_ += list(itertools.chain.from_iterable(
t[0] for t in all_results))
self.estimators_features_ += list(itertools.chain.from_iterable(
t[1] for t in all_results))
if self.oob_score:
self._set_oob_score(X, y)
return self
@abstractmethod
def _set_oob_score(self, X, y):
"""Calculate out of bag predictions and score."""
def _validate_y(self, y):
# Default implementation
return column_or_1d(y, warn=True)
def _get_estimators_indices(self):
# Get drawn indices along both sample and feature axes
for seed in self._seeds:
# Operations accessing random_state must be performed identically
# to those in `_parallel_build_estimators()`
random_state = np.random.RandomState(seed)
feature_indices, sample_indices = _generate_bagging_indices(
random_state, self.bootstrap_features, self.bootstrap,
self.n_features_, self._n_samples, self._max_features,
self._max_samples)
yield feature_indices, sample_indices
@property
def estimators_samples_(self):
"""The subset of drawn samples for each base estimator.
Returns a dynamically generated list of boolean masks identifying
the samples used for fitting each member of the ensemble, i.e.,
the in-bag samples.
Note: the list is re-created at each call to the property in order
to reduce the object memory footprint by not storing the sampling
data. Thus fetching the property may be slower than expected.
"""
sample_masks = []
for _, sample_indices in self._get_estimators_indices():
mask = indices_to_mask(sample_indices, self._n_samples)
sample_masks.append(mask)
return sample_masks
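# Minimal sketch (illustration only): why storing only self._seeds is enough for
# estimators_samples_. Re-seeding a RandomState reproduces the exact same draws,
# so the in-bag indices can be regenerated on demand instead of kept in memory.
import numpy as np
_demo_seed = 1234
_demo_first = np.random.RandomState(_demo_seed).randint(0, 10, size=5)
_demo_second = np.random.RandomState(_demo_seed).randint(0, 10, size=5)
assert np.array_equal(_demo_first, _demo_second)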
class BaggingClassifier2(BaseBagging, ClassifierMixin):
"""A Bagging classifier.
A Bagging classifier is an ensemble meta-estimator that fits base
classifiers each on random subsets of the original dataset and then
aggregate their individual predictions (either by voting or by averaging)
to form a final prediction. Such a meta-estimator can typically be used as
a way to reduce the variance of a black-box estimator (e.g., a decision
tree), by introducing randomization into its construction procedure and
then making an ensemble out of it.
This algorithm encompasses several works from the literature. When random
subsets of the dataset are drawn as random subsets of the samples, then
this algorithm is known as Pasting [1]_. If samples are drawn with
replacement, then the method is known as Bagging [2]_. When random subsets
of the dataset are drawn as random subsets of the features, then the method
is known as Random Subspaces [3]_. Finally, when base estimators are built
on subsets of both samples and features, then the method is known as
Random Patches [4]_.
Read more in the :ref:`User Guide <bagging>`.
Parameters
----------
base_estimator : object or None, optional (default=None)
The base estimator to fit on random subsets of the dataset.
If None, then the base estimator is a decision tree.
n_estimators : int, optional (default=10)
The number of base estimators in the ensemble.
max_samples : int or float, optional (default=1.0)
The number of samples to draw from X to train each base estimator.
- If int, then draw `max_samples` samples.
- If float, then draw `max_samples * X.shape[0]` samples.
max_features : int or float, optional (default=1.0)
The number of features to draw from X to train each base estimator.
- If int, then draw `max_features` features.
- If float, then draw `max_features * X.shape[1]` features.
bootstrap : boolean, optional (default=True)
Whether samples are drawn with replacement.
bootstrap_features : boolean, optional (default=False)
Whether features are drawn with replacement.
oob_score : bool
Whether to use out-of-bag samples to estimate
the generalization error.
warm_start : bool, optional (default=False)
When set to True, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just fit
a whole new ensemble.
.. versionadded:: 0.17
*warm_start* constructor parameter.
n_jobs : int, optional (default=1)
The number of jobs to run in parallel for both `fit` and `predict`.
If -1, then the number of jobs is set to the number of cores.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
verbose : int, optional (default=0)
Controls the verbosity of the building process.
Attributes
----------
base_estimator_ : estimator
The base estimator from which the ensemble is grown.
estimators_ : list of estimators
The collection of fitted base estimators.
estimators_samples_ : list of arrays
The subset of drawn samples (i.e., the in-bag samples) for each base
estimator. Each subset is defined by a boolean mask.
estimators_features_ : list of arrays
The subset of drawn features for each base estimator.
classes_ : array of shape = [n_classes]
The classes labels.
n_classes_ : int or list
The number of classes.
oob_score_ : float
Score of the training dataset obtained using an out-of-bag estimate.
oob_decision_function_ : array of shape = [n_samples, n_classes]
Decision function computed with out-of-bag estimate on the training
set. If n_estimators is small it might be possible that a data point
was never left out during the bootstrap. In this case,
`oob_decision_function_` might contain NaN.
References
----------
.. [1] L. Breiman, "Pasting small votes for classification in large
databases and on-line", Machine Learning, 36(1), 85-103, 1999.
.. [2] L. Breiman, "Bagging predictors", Machine Learning, 24(2), 123-140,
1996.
.. [3] T. Ho, "The random subspace method for constructing decision
forests", Pattern Analysis and Machine Intelligence, 20(8), 832-844,
1998.
.. [4] G. Louppe and P. Geurts, "Ensembles on Random Patches", Machine
Learning and Knowledge Discovery in Databases, 346-361, 2012.
"""
def __init__(self,
base_estimator=None,
n_estimators=10,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
oob_score=False,
warm_start=False,
n_jobs=1,
random_state=None,
verbose=0):
super(BaggingClassifier2, self).__init__(
base_estimator,
n_estimators=n_estimators,
max_samples=max_samples,
max_features=max_features,
bootstrap=bootstrap,
bootstrap_features=bootstrap_features,
oob_score=oob_score,
warm_start=warm_start,
n_jobs=n_jobs,
random_state=random_state,
verbose=verbose)
def _validate_estimator(self):
"""Check the estimator and set the base_estimator_ attribute."""
super(BaggingClassifier2, self)._validate_estimator(
default=DecisionTreeClassifier())
def _set_oob_score(self, X, y):
n_samples = y.shape[0]
n_classes_ = self.n_classes_
classes_ = self.classes_
predictions = np.zeros((n_samples, n_classes_))
for estimator, samples, features in zip(self.estimators_,
self.estimators_samples_,
self.estimators_features_):
# Create mask for OOB samples
mask = ~samples
if hasattr(estimator, "predict_proba"):
predictions[mask, :] += estimator.predict_proba(
(X[mask, :])[:, features])
else:
p = estimator.predict((X[mask, :])[:, features])
j = 0
for i in range(n_samples):
if mask[i]:
predictions[i, p[j]] += 1
j += 1
if (predictions.sum(axis=1) == 0).any():
warn("Some inputs do not have OOB scores. "
"This probably means too few estimators were used "
"to compute any reliable oob estimates.")
oob_decision_function = (predictions /
predictions.sum(axis=1)[:, np.newaxis])
oob_score = accuracy_score(y, np.argmax(predictions, axis=1))
self.oob_decision_function_ = oob_decision_function
self.oob_score_ = oob_score
def _validate_y(self, y):
y = column_or_1d(y, warn=True)
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def predict(self, X):
"""Predict class for X.
The predicted class of an input sample is computed as the class with
the highest mean predicted probability. If base estimators do not
implement a ``predict_proba`` method, then it resorts to voting.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
predicted_probability = self.predict_proba(X)
return self.classes_.take((np.argmax(predicted_probability, axis=1)),
axis=0)
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the mean predicted class probabilities of the base estimators in the
ensemble. If base estimators do not implement a ``predict_proba``
method, then it resorts to voting and the predicted class probabilities
of an input sample represents the proportion of estimators predicting
each class.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(X, accept_sparse=['csr', 'csc'])
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
# Parallel loop
n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
self.n_jobs)
all_proba = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
delayed(_parallel_predict_proba)(
self.estimators_[starts[i]:starts[i + 1]],
self.estimators_features_[starts[i]:starts[i + 1]],
X,
self.n_classes_)
for i in range(n_jobs))
# Reduce
proba = sum(all_proba) / self.n_estimators
return proba
def predict_log_proba(self, X):
"""Predict class log-probabilities for X.
The predicted class log-probabilities of an input sample is computed as
the log of the mean predicted class probabilities of the base
estimators in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class log-probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
if hasattr(self.base_estimator_, "predict_log_proba"):
# Check data
X = check_array(X, accept_sparse=['csr', 'csc'])
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} "
"and input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
# Parallel loop
n_jobs, n_estimators, starts = _partition_estimators(
self.n_estimators, self.n_jobs)
all_log_proba = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
delayed(_parallel_predict_log_proba)(
self.estimators_[starts[i]:starts[i + 1]],
self.estimators_features_[starts[i]:starts[i + 1]],
X,
self.n_classes_)
for i in range(n_jobs))
# Reduce
log_proba = all_log_proba[0]
for j in range(1, len(all_log_proba)):
log_proba = np.logaddexp(log_proba, all_log_proba[j])
log_proba -= np.log(self.n_estimators)
return log_proba
else:
return np.log(self.predict_proba(X))
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
"""Average of the decision functions of the base classifiers.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The columns correspond
to the classes in sorted order, as they appear in the attribute
``classes_``. Regression and binary classification are special
cases with ``k == 1``, otherwise ``k==n_classes``.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(X, accept_sparse=['csr', 'csc'])
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
# Parallel loop
n_jobs, n_estimators, starts = _partition_estimators(self.n_estimators,
self.n_jobs)
all_decisions = Parallel(n_jobs=n_jobs, verbose=self.verbose)(
delayed(_parallel_decision_function)(
self.estimators_[starts[i]:starts[i + 1]],
self.estimators_features_[starts[i]:starts[i + 1]],
X)
for i in range(n_jobs))
# Reduce
decisions = sum(all_decisions) / self.n_estimators
return decisions
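# Minimal usage sketch (illustration only) of the sampling regimes described in the
# docstring above, using sklearn's stock BaggingClassifier (also used later in this
# notebook) rather than the BaggingClassifier2 defined here. The flags map to:
#   bootstrap=True (samples with replacement)            -> Bagging
#   max_samples < 1.0 with bootstrap=False               -> Pasting
#   max_features < 1.0 with all samples                  -> Random Subspaces
#   max_samples < 1.0 and max_features < 1.0             -> Random Patches
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
_demo_X, _demo_y = make_classification(n_samples=200, n_features=10, random_state=0)
_demo_patches = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                                  max_samples=0.5, max_features=0.5, random_state=0)
_demo_patches.fit(_demo_X, _demo_y)
print(_demo_patches.score(_demo_X, _demo_y))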
#print(y)
#print(type(y), y.shape)
#print(X.shape)
#X, self.X_dev, y, self.y_dev = train_test_split(X, y, test_size=self.dev_set, stratify=y)
n_samples, self.n_features_ = X.shape
self._n_samples = n_samples
self.samples_times_selected = np.zeros_like(y)
self.default_sample_weight = 1/float(X.shape[0])
self.max_samples = int(self.max_samples*self._n_samples)
self.max_features = int(self.max_features*self.n_features_)
y = self._validate_y(y)
for i_est in xrange(self.n_estimators):
estimator = clone(self.base_estimator_)
if self.patcher == 'random':
features_indices, samples_indices = generate_bagging_indices(
self.random_state, self.bootstrap_features,
self.bootstrap_samples, X.shape[1], X.shape[0],
self.max_features, self.max_samples, [None])
elif self.patcher == 'weighted':
# X_train, X_dev, y_train, y_dev = train_test_split(X, y, stratify = True,
# test_size=self.dev_set,
# random_state=self.random_state)
if i_est==0:
self.samples_weights.append(np.array([self.default_sample_weight for i in xrange(X.shape[0])]))
self.scores.append(1)
signs = np.ones_like(y)
else:
signs = self.update_weights(X, y)
#print("ROUND %d"%i_est)
#print(self.samples_weights[i_est])
#print("INDEX %d" % i)
#print(self.samples_weights[i_est])
features_indices, samples_indices = generate_bagging_indices(
self.random_state, self.bootstrap_features,
self.bootstrap_samples, X.shape[1], X.shape[0],
self.max_features, self.max_samples, self.samples_weights[i_est])
print("ROUND %d" % i_est)
print("SCORE %0.3f"%(self.scores[-1]))
if i_est != 0:
accs = []
accs_last = []
overlaps = []
accs_total = []
for jj, ens in enumerate(self.ensemble):
print(len(self.prev_samples_indices), len(self.estimators_features))
p_pred = ens.predict(X[self.prev_samples_indices[jj]][:, self.estimators_features[jj]])
last_pred = ens.predict(X[self.prev_samples_indices[-1]][:, self.estimators_features[jj]])
total_pred = self.predict(X[self.prev_samples_indices[jj]])
accs.append(1-accuracy_score(y[self.prev_samples_indices[jj]], p_pred, normalize=True))
accs_last.append(1-accuracy_score(y[self.prev_samples_indices[-1]], last_pred, normalize=True))
accs_total.append(1-accuracy_score(y[self.prev_samples_indices[jj]], total_pred, normalize=True))
#try:
# self.ensemble[jj+1]
#except IndexError:
# print("CURRENT")
# print(self.prev_samples_indices[jj])
# print("LAST")
# print(self.prev_samples_indices[-1])
# print(np.array_equal(self.prev_samples_indices[jj], self.prev_samples_indices[-1]))
# print(np.intersect1d(self.prev_samples_indices[jj], self.prev_samples_indices[-1]).shape[0], self.prev_samples_indices[-1].shape[0], self.prev_samples_indices[jj].shape[0])
overlaps.append(np.intersect1d(self.prev_samples_indices[jj], self.prev_samples_indices[-1]).shape[0]/float(np.unique(self.prev_samples_indices[jj]).shape[0]))
accs = np.round(100*np.array(accs),3)
accs_last = np.round(100*np.array(accs_last),3)
overlaps = np.round(100*np.array(overlaps),3)
accs_total = np.round(100*np.array(accs_total),3)
print("PREVIOUS ERRORS ON CORRESPONDING DATA")
print(accs)
print("ERRORS OF THE ENSEMBLE ON CORRESPONDING DATA")
print(accs_total)
print("PREVIOUS ERRORS ON LAST SELECTED DATA")
print(accs_last)
print("OVERLAPS")
print(overlaps)
print("a: %0.3f"%self.a)
print("WEIGHTS")
print(np.min(self.samples_weights[-1]),
np.mean(self.samples_weights[-1]),
np.max(self.samples_weights[-1]))
#print(self.samples_weights[-1])
print("SAMPLED INSTANCES")
print(self.samples_weights[-1][samples_indices])
print(signs[samples_indices])
print(y[samples_indices])
print("~"*50)
else:
    raise ValueError("UNSUPPORTED WAY OF PATCHING: %s !" % self.patcher)
# minor fix for when one class is not represented during sampling
#print(samples_indices.shape)
samples_indices = self.fix_class_indices(y, samples_indices)
#print("AFTER")
#print(samples_indices.shape)
estimator.fit(X[samples_indices][:, features_indices], y[samples_indices])
self.prev_samples_indices.append(samples_indices)
self.estimators_features.append(features_indices)
self.ensemble.append(estimator)
self.samples_times_selected[samples_indices] += 1
return
def fix_class_indices(self, y, samples_indices):
in_set = set(y[samples_indices])
a = set(y).difference(in_set)
for item in a:
samples_indices= np.append(samples_indices, [np.where(y==item)[0][0]])
return samples_indices
def update_weights(self, X, y):
#print("REMOVED %d" % np.where(self.samples_times_selected > 3)[0].shape[0])
#self.samples_weights[-1][np.where(self.samples_times_selected > 3)] = self.default_sample_weight
#self.samples_times_selected[np.where(self.samples_times_selected > 3)] = 0
preds = self.predict(X)
self.scores.append(1-accuracy_score(y, preds, normalize=True)+0.0001)
self.a = 0.5*np.log((1-self.scores[-1])/float(self.scores[-1]))
#self.a = 9
#self.a = 1
signs = sign(y, preds)
temp = self.samples_weights[-1]*np.exp(-self.a*signs)
#print("MEAN")
#print(np.mean(temp))
#print(temp[np.where(signs<0)])
self.samples_weights.append(temp/np.sum(temp))
preds = np.ones((X.shape[0], len(self.ensemble)))
for ii, est in enumerate(self.ensemble):
sc = accuracy_score(y, est.predict(X[:, self.estimators_features[ii]]), normalize=True)
preds[:, ii] = sc*sign(y, est.predict(X[:, self.estimators_features[ii]]))
#print("INITIAL")
#print(preds)
#print(np.min(preds))
#print("AFTER COLLAPSE")
preds = np.sum(preds, axis=1)
#print(preds)
#print("AFTER RESHAPE")
preds = preds.reshape(-1,)
#print(preds)
if np.any(preds<0):
min_ = np.min(preds)
if min_ < 0:
min_ = -1*min_
#print("MIN")
#print(min_)
preds = preds + min_ + 0.001
#print("AFTER_MIN")
#print(preds)
#print("AFTER NORMALIZATION")
preds /= np.sum(preds)
#print(preds)
#print(preds)
#print(np.sum(preds))
self.samples_weights.append(preds)
return signs
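# Minimal sketch (illustration only) of the AdaBoost-style reweighting in
# update_weights above, assuming sign() returns +1 for correctly classified
# samples and -1 for misclassified ones. With error rate eps, a = 0.5*ln((1-eps)/eps);
# correct points are scaled by exp(-a), misclassified ones by exp(+a), then the
# weights are renormalised to sum to 1.
import numpy as np
_demo_eps = 0.25
_demo_a = 0.5 * np.log((1 - _demo_eps) / _demo_eps)      # ~0.549
_demo_signs = np.array([1.0, 1.0, 1.0, -1.0])            # last sample misclassified
_demo_w = np.full(4, 0.25) * np.exp(-_demo_a * _demo_signs)
_demo_w /= _demo_w.sum()
# the misclassified sample now carries weight 0.5, the other three 1/6 each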
from sklearn.utils import check_random_state, check_X_y, column_or_1d
from sklearn.utils.multiclass import check_classification_targets
from sklearn.base import BaseEstimator, ClassifierMixin, clone
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import pairwise_distances
from sklearn.metrics import accuracy_score, get_scorer
import copy
class Adversarial_Cascade(BaseEstimator, ClassifierMixin):
def __init__(self, base_estimator=KNeighborsClassifier(), n_estimators=10, acc_target=0.99,
num_adversaries_per_instance=4, way = 'prob',
random_state=42, optim=False, parameters=None, metric='accuracy', oob=False, oob_size=0.1):
self.base_estimator = base_estimator
self.n_estimators = n_estimators
self.acc_target = acc_target
self.num_adversaries_per_instance = num_adversaries_per_instance
self.way = way
self.random_state = check_random_state(random_state)
self.optim = optim
self.oob = oob
self.oob_size = oob_size
self.X_oob = None
self.y_oob = None
if self.optim:
self.parameters = parameters
else:
self.parameters = None
self.scoring = get_scorer(metric)
self.acc = 0
self.ensemble = []
self.selected_indices = []
def fit(self, X, y):
return self._fit(X, y)
def _fit(self,X,y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
y = self._validate_y(y)
if self.oob:
X, self.X_oob, y, self.y_oob = train_test_split(X,y,test_size=self.oob_size,stratify=y)
n_samples, self.n_features_ = X.shape
cur_X, cur_y = X, y
self.selected_indices.append([i for i in xrange(X.shape[0])])
flag_target = False
for i_est in xrange(self.n_estimators):
cur_mod = clone(self.base_estimator)
if self.optim:
grid_search = GridSearchCV(cur_mod, self.parameters, n_jobs=-1, verbose=1, refit=True)
grid_search.fit(cur_X, cur_y)
cur_mod = grid_search.best_estimator_
else:
cur_mod.fit(cur_X, cur_y)
self.ensemble.append(cur_mod)
cur_X, cur_y, flag_target = self._create_next_batch(X, y)
if flag_target:
break
#print(cur_X.shape, cur_y.shape)
print("%d ESTIMATORS -- %0.3f" % (len(self.ensemble), 100*accuracy_score(y, self.predict(X), normalize=True)))
return self
def _create_next_batch(self, X, y):
if self.oob:
preds = self.predict(self.X_oob)
centroids = self.X_oob[preds != self.y_oob]
centroids_ind = np.argwhere(preds != self.y_oob).reshape(-1,)
cur_X = copy.deepcopy(self.X_oob[centroids_ind,:])
cur_y = copy.deepcopy(self.y_oob[centroids_ind])
str_target = "OOB SAMPLE"
self.acc = accuracy_score(self.y_oob, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(self.X_oob.shape[0]))
else:
preds = self.predict(X)
centroids = X[preds != y]
centroids_ind = np.argwhere(preds!=y).reshape(-1,)
cur_X = copy.deepcopy(X[centroids_ind,:])
cur_y = copy.deepcopy(y[centroids_ind])
str_target = "TRAIN SAMPLE"
self.acc = accuracy_score(y, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(X.shape[0]))
if self.acc > self.acc_target:
#return X, y, False
#print("ACCURACY ON THE %s IS %0.3f" % (str_target, 100*(1-(centroids.shape[0])/float(X.shape[0]))))
#print("STOPPING WITH %d BASE MODELS" % len(self.ensemble))
return None, None, True
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
for i_centr in xrange(probas.shape[0]):
if self.way == 'prob':
indices = self.random_state.choice([i for i in xrange(0, probas.shape[1])],
self.num_adversaries_per_instance, p=probas[i_centr,:])
if self.way == 'furthest':
indices = np.argsort(probas[i_centr,:])[::-1][:self.num_adversaries_per_instance]
indices = self._fix_class_indices(y, indices)
#print(cur_X.shape, X[indices,:].shape)
cur_X = np.vstack((cur_X, X[indices,:]))
cur_y = np.append(cur_y, y[indices])
#cur_y.extend(indices)
#cur_X = np.delete(cur_X, 0, axis=0)
#cur_y = y[cur_y]
return cur_X, cur_y, False
def _fix_class_indices(self, y, samples_indices):
in_set = set(y[samples_indices])
a = set(y).difference(in_set)
for item in a:
samples_indices= np.append(samples_indices, [np.where(y==item)[0][0]])
return samples_indices
def _validate_y(self, y):
y = column_or_1d(y, warn=True)
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def predict(self, X):
"""Predict class for X.
The predicted class of an input sample is computed as the class with
the highest mean predicted probability. If base estimators do not
implement a ``predict_proba`` method, then it resorts to voting.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
if hasattr(self.base_estimator, "predict_proba"):
predicted_probability = self.predict_proba(X)
return self.classes_.take((np.argmax(predicted_probability, axis=1)),
axis=0)
else:
predicted_probability = np.zeros((X.shape[0],1), dtype=int)
for i, ens in enumerate(self.ensemble):
predicted_probability = np.hstack((predicted_probability, ens.predict(X).reshape(-1,1)))
predicted_probability = np.delete(predicted_probability,0,axis=1)
final_pred = []
for sample in xrange(X.shape[0]):
final_pred.append(most_common(predicted_probability[sample,:]))
#votes = []
#for i, mod_vote in predictions[sample,:]:
# votes.extend([predictions[sample, i] for j in xrange(int(self.acc[i]))])
#final_pred = most_common(votes)
return np.array(final_pred)
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the mean predicted class probabilities of the base estimators in the
ensemble. If base estimators do not implement a ``predict_proba``
method, then it resorts to voting and the predicted class probabilities
of an input sample represents the proportion of estimators predicting
each class.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
all_proba = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_proba += ens.predict_proba(X)
all_proba /= len(self.ensemble)  # the cascade may stop early, so average over the fitted models
#print(all_proba.shape)
#print(all_proba)
#proba = np.sum(all_proba, axis=0) / self.n_estimators
#print(proba.shape)
#print(proba)
return all_proba
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
"""Average of the decision functions of the base classifiers.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The columns correspond
to the classes in sorted order, as they appear in the attribute
``classes_``. Regression and binary classification are special
cases with ``k == 1``, otherwise ``k==n_classes``.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
all_decisions = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_decisions += ens.predict_proba(X)
decisions = all_decisions / len(self.ensemble)
return decisions
def most_common(lst):
if isinstance(lst, np.ndarray):
lst = lst.tolist()
#print(lst, max(set(lst), key=lst.count) )
return max(set(lst), key=lst.count)
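# Minimal check (illustration only): most_common is plain majority voting over the
# base models' votes for a single sample.
_demo_votes = [1, 0, 1, 2, 1]
assert most_common(_demo_votes) == 1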
ac = Adversarial_Cascade(base_estimator=knn, oob=False, num_adversaries_per_instance=10)
ac.fit(X,y)
preds = ac.predict(X)
print(accuracy_score(y,preds,normalize=True))
#ac.selected_indices
X_train, X_dev, y_train, y_dev = train_test_split(X,y,stratify=y,test_size=0.1, random_state=42)
ac = Adversarial_Cascade(base_estimator=KNeighborsClassifier(),
oob=False,
num_adversaries_per_instance=5, way='furthest')
ac.fit(X_train,y_train)
preds = ac.predict(X_dev)
print(accuracy_score(y_dev,preds,normalize=True))
ac = Adversarial_Cascade(base_estimator=base, optim=True, parameters=parameters, oob=True, random_state=i)
print(cross_val_score(ac, X, y, cv=5, n_jobs=-1).mean())
knn.fit(X,y)
preds = knn.predict(X)
centroids = X[preds!=y]
knn.fit(X,y)
list(knn.predict(X))
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
check_random_state(42).rand
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
parameters = {
'clf__alpha': (0.00001, 0.000001),
'clf__penalty': ('l2', 'elasticnet'),
'clf__max_iter': (10, 50, 80, 150),
}
pipeline = Pipeline([ ('std', StandardScaler()), ('clf', SGDClassifier())])
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1)
model_names = ['BASE', "BAGGING", "RANDOM_PATCHES", "WEIGHTED_RP", "AC"]
base = pipeline
models = [
base,
BaggingClassifier(base, max_samples=0.5, max_features=0.5),
Vanilla_RP(base,max_samples=0.5, max_features=0.5),
Vanilla_RP(base, max_samples=0.5, max_features=0.5, patcher='weighted'),
Adversarial_Cascade(base_estimator=base, optim=False, parameters=parameters, oob=True)
]
scores = {}
for m in model_names:
scores[m] = []
for i in xrange(10):
print("ROUND %d" % i)
for m_i in xrange(len(model_names)):
try:
models[m_i].set_params(**{'clf__random_state':i})
except ValueError:
models[m_i].set_params(**{'random_state':i})
print(models[m_i])
sc = cross_val_score(models[m_i], X, y, cv=5, n_jobs=-1).mean()
print("MODEL %s -- %0.3f" %(model_names[m_i], 100*sc))
scores[model_names[m_i]].append(sc)
base = KNeighborsClassifier()
bagging = BaggingClassifier(base, max_samples=0.5, max_features=0.5, random_state=42)
rp = Vanilla_RP(base,max_samples=0.5, max_features=0.5, random_state=42)
rp_w = Vanilla_RP(base, max_samples=0.5, max_features=0.5, patcher='weighted', random_state=42)
ac = Adversarial_Cascade(base_estimator=base, num_adversaries_per_instance=10,
optim=False,
parameters=parameters,
oob=False, random_state=42)
print("KNN Score:")
print(cross_val_score(knn, X, y, cv=5, n_jobs=-1).mean())
print("Bagging Score:")
print(cross_val_score(bagging, X, y, cv=5, n_jobs=-1).mean())
print("RP Score:")
print(cross_val_score(rp, X, y, cv=5, n_jobs=-1).mean())
print("RP-WEIGHTED Score:")
print(cross_val_score(rp_w, X, y, cv=5, n_jobs=-1).mean())
print("AC")
print(cross_val_score(ac, X, y, cv=5, n_jobs=-1).mean())
print("~"*50)
for i in xrange(10):
print("ROUND %d" % i)
base = pipeline
bagging = BaggingClassifier(base, max_samples=0.5, max_features=0.5, random_state=i)
rp = Vanilla_RP(base,max_samples=0.5, max_features=0.5, random_state=i)
rp_w = Vanilla_RP(base, max_samples=0.5, max_features=0.5, patcher='weighted', random_state=i)
ac = Adversarial_Cascade(base_estimator=base, optim=True, parameters=parameters, oob=True, random_state=i)
print("KNN Score:")
print(cross_val_score(knn, X, y, cv=5, n_jobs=-1).mean())
print("Bagging Score:")
print(cross_val_score(bagging, X, y, cv=5, n_jobs=-1).mean())
print("RP Score:")
print(cross_val_score(rp, X, y, cv=5, n_jobs=-1).mean())
print("RP-WEIGHTED Score:")
print(cross_val_score(rp_w, X, y, cv=5, n_jobs=-1).mean())
print("AC")
print(cross_val_score(ac, X, y, cv=5, n_jobs=-1).mean())
print("~"*50)
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
def make_meshgrid(x, y, h=.02):
"""Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
"""
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
"""Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
"""
#print(clf)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
# import some data to play with
iris = datasets.load_iris()
# Take the first two features. We could avoid this by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
C = 1.0 # SVM regularization parameter
base = AdaBoostClassifier()
#KNeighborsClassifier()
#ExtraTreeClassifier()
#DecisionTreeClassifier()
#SGDClassifier()
#svm.SVC(kernel='linear', C=C, probability=True)
bagging = BaggingClassifier(base, max_samples=0.5, max_features=0.5, random_state=42)
rp = Vanilla_RP(base,max_samples=0.5, max_features=0.5, random_state=42)
rp_w = Vanilla_RP(base, max_samples=0.5, max_features=0.5, patcher='weighted', random_state=42)
ac = Adversarial_Cascade(base_estimator=base, num_adversaries_per_instance=10,
optim=False,
parameters=None, oob=True, way='furthest')
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
models = (svm.SVC(kernel='linear', C=C),
svm.SVC(kernel='rbf', gamma=0.7, C=C),
KNeighborsClassifier(),
pipeline,
bagging,
rp,
rp_w,
ac)
models = (clf.fit(X, y) for clf in models)
# title for the plots
titles = ('SVC with linear kernel',
'SVC with RBF kernel',
'KNN',
'SGD',
"Bagging",
"RP",
"RP-W",
'AC')
# Set-up 2x2 grid for plotting.
fig, sub = plt.subplots(2, 4)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
for clf, title, ax in zip(models, titles, sub.flatten()):
print("%s : %0.3f"% (title, 100*accuracy_score(y, clf.predict(X))))
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
fig.set_figwidth(20)
fig.set_figheight(10)
plt.show()
class Adversarial_Cascade(BaseEstimator, ClassifierMixin):
def __init__(self, base_estimator=KNeighborsClassifier(), n_estimators=10, acc_target=0.99,
num_adversaries_per_instance=4, way = 'prob',
random_state=42, optim=False, parameters=None, metric='accuracy', oob=False, oob_size=0.1):
self.base_estimator = base_estimator
self.n_estimators = n_estimators
self.acc_target = acc_target
self.num_adversaries_per_instance = num_adversaries_per_instance
self.way = way
self.random_state = check_random_state(random_state)
self.optim = optim
self.oob = oob
self.oob_size = oob_size
self.X_oob = None
self.y_oob = None
if self.optim:
self.parameters = parameters
else:
self.parameters = None
self.scoring = get_scorer(metric)
self.acc = 0
self.ensemble = []
self.selected_indices = []
def fit(self, X, y):
return self._fit(X, y)
def _fit(self,X,y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
y = self._validate_y(y)
if self.oob:
X, self.X_oob, y, self.y_oob = train_test_split(X,y,test_size=self.oob_size,stratify=y)
n_samples, self.n_features_ = X.shape
cur_X, cur_y = X, y
self.selected_indices.append([i for i in xrange(X.shape[0])])
flag_target = False
for i_est in xrange(self.n_estimators):
cur_mod = clone(self.base_estimator)
if self.optim:
grid_search = GridSearchCV(cur_mod, self.parameters, n_jobs=-1, verbose=1, refit=True)
grid_search.fit(cur_X, cur_y)
cur_mod = grid_search.best_estimator_
else:
cur_mod.fit(cur_X, cur_y)
self.ensemble.append(cur_mod)
cur_X, cur_y, flag_target = self._create_next_batch(X, y)
if flag_target:
break
#print(cur_X.shape, cur_y.shape)
print("%d ESTIMATORS -- %0.3f" % (len(self.ensemble), 100*accuracy_score(y, self.predict(X), normalize=True)))
return self
def _create_next_batch(self, X, y):
if self.oob:
preds = self.predict(self.X_oob)
centroids = self.X_oob[preds != self.y_oob]
centroids_ind = np.argwhere(preds != self.y_oob).reshape(-1,)
cur_X = copy.deepcopy(self.X_oob[centroids_ind,:])
cur_y = copy.deepcopy(self.y_oob[centroids_ind])
str_target = "OOB SAMPLE"
self.acc = accuracy_score(self.y_oob, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(self.X_oob.shape[0]))
else:
preds = self.predict(X)
centroids = X[preds != y]
centroids_ind = np.argwhere(preds!=y).reshape(-1,)
cur_X = copy.deepcopy(X[centroids_ind,:])
cur_y = copy.deepcopy(y[centroids_ind])
str_target = "TRAIN SAMPLE"
self.acc = accuracy_score(y, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(X.shape[0]))
if self.acc > self.acc_target:
#return X, y, False
#print("ACCURACY ON THE %s IS %0.3f" % (str_target, 100*(1-(centroids.shape[0])/float(X.shape[0]))))
#print("STOPPING WITH %d BASE MODELS" % len(self.ensemble))
return None, None, True
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
for i_centr in xrange(probas.shape[0]):
# Make zero the probability that a same-class sample is picked
cur_prob = copy.deepcopy(probas[i_centr,:])
cur_prob[y[centroids_ind[i_centr]]==y]=0
print(cur_prob.shape, np.sum(cur_prob))
cur_prob /= np.sum(cur_prob)
if self.way == 'prob':
indices = self.random_state.choice([i for i in xrange(0, probas.shape[1])],
self.num_adversaries_per_instance, p=cur_prob)
if self.way == 'furthest':
indices = np.argsort(cur_prob)[::-1][:self.num_adversaries_per_instance]
if self.way == 'closest':
cur_prob[y[centroids_ind[i_centr]]==y]=1
indices = np.argsort(cur_prob)[:self.num_adversaries_per_instance]
indices = self._fix_class_indices(y, indices)
#print(cur_X.shape, X[indices,:].shape)
cur_X = np.vstack((cur_X, X[indices,:]))
cur_y = np.append(cur_y, y[indices])
#cur_y.extend(indices)
#cur_X = np.delete(cur_X, 0, axis=0)
#cur_y = y[cur_y]
return cur_X, cur_y, False
def _fix_class_indices(self, y, samples_indices):
in_set = set(y[samples_indices])
a = set(y).difference(in_set)
for item in a:
samples_indices= np.append(samples_indices, [np.where(y==item)[0][0]])
return samples_indices
def _validate_y(self, y):
y = column_or_1d(y, warn=True)
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def predict(self, X):
"""Predict class for X.
The predicted class of an input sample is computed as the class with
the highest mean predicted probability. If base estimators do not
implement a ``predict_proba`` method, then it resorts to voting.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
if hasattr(self.base_estimator, "predict_proba"):
predicted_probability = self.predict_proba(X)
return self.classes_.take((np.argmax(predicted_probability, axis=1)),
axis=0)
else:
predicted_probability = np.zeros((X.shape[0],1), dtype=int)
for i, ens in enumerate(self.ensemble):
predicted_probability = np.hstack((predicted_probability, ens.predict(X).reshape(-1,1)))
predicted_probability = np.delete(predicted_probability,0,axis=1)
final_pred = []
for sample in xrange(X.shape[0]):
final_pred.append(most_common(predicted_probability[sample,:]))
#votes = []
#for i, mod_vote in predictions[sample,:]:
# votes.extend([predictions[sample, i] for j in xrange(int(self.acc[i]))])
#final_pred = most_common(votes)
return np.array(final_pred)
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the mean predicted class probabilities of the base estimators in the
ensemble. If base estimators do not implement a ``predict_proba``
method, then it resorts to voting and the predicted class probabilities
of an input sample represents the proportion of estimators predicting
each class.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
all_proba = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_proba += ens.predict_proba(X)
all_proba /= len(self.ensemble)  # the cascade may stop early, so average over the fitted models
#print(all_proba.shape)
#print(all_proba)
#proba = np.sum(all_proba, axis=0) / self.n_estimators
#print(proba.shape)
#print(proba)
return all_proba
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
"""Average of the decision functions of the base classifiers.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The columns correspond
to the classes in sorted order, as they appear in the attribute
``classes_``. Regression and binary classification are special
cases with ``k == 1``, otherwise ``k==n_classes``.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
all_decisions = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_decisions += ens.predict_proba(X)
decisions = all_decisions / len(self.ensemble)
return decisions
def viz_fit(self, X, y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
y = self._validate_y(y)
if self.oob:
X, self.X_oob, y, self.y_oob = train_test_split(X,y,test_size=self.oob_size,stratify=y)
n_samples, self.n_features_ = X.shape
cur_X, cur_y = X, y
self.selected_indices.append([i for i in xrange(X.shape[0])])
flag_target = False
for i_est in xrange(self.n_estimators):
cur_mod = clone(self.base_estimator)
if self.optim:
grid_search = GridSearchCV(cur_mod, self.parameters, n_jobs=-1, verbose=1, refit=True)
grid_search.fit(cur_X, cur_y)
cur_mod = grid_search.best_estimator_
else:
cur_mod.fit(cur_X, cur_y)
self.ensemble.append(cur_mod)
cur_X, cur_y, flag_target = self.viz_create_next_batch(X, y)
if flag_target:
break
#print(cur_X.shape, cur_y.shape)
print("%d ESTIMATORS -- %0.3f" % (len(self.ensemble), 100*accuracy_score(y, self.predict(X), normalize=True)))
return self
def viz_create_next_batch(self, X, y):
if self.oob:
preds = self.predict(self.X_oob)
centroids = self.X_oob[preds != self.y_oob]
centroids_ind = np.argwhere(preds != self.y_oob).reshape(-1,)
cur_X = copy.deepcopy(self.X_oob[centroids_ind,:])
cur_y = copy.deepcopy(self.y_oob[centroids_ind])
str_target = "OOB SAMPLE"
self.acc = accuracy_score(self.y_oob, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(self.X_oob.shape[0]))
else:
preds = self.predict(X)
centroids = X[preds != y]
centroids_ind = np.argwhere(preds!=y).reshape(-1,)
cur_X = copy.deepcopy(X[centroids_ind,:])
cur_y = copy.deepcopy(y[centroids_ind])
str_target = "TRAIN SAMPLE"
self.acc = accuracy_score(y, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(X.shape[0]))
if self.acc > self.acc_target:
#return X, y, False
#print("ACCURACY ON THE %s IS %0.3f" % (str_target, 100*(1-(centroids.shape[0])/float(X.shape[0]))))
#print("STOPPING WITH %d BASE MODELS" % len(self.ensemble))
return None, None, True
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
for i_centr in xrange(probas.shape[0]):
# Make zero the probability that a same-class sample is picked
cur_prob = copy.deepcopy(probas[i_centr,:])
cur_prob[y[centroids_ind[i_centr]]==y]=0
print(cur_prob.shape, np.sum(cur_prob))
cur_prob /= np.sum(cur_prob)
if self.way == 'prob':
indices = self.random_state.choice([i for i in xrange(0, probas.shape[1])],
self.num_adversaries_per_instance, p=cur_prob)
if self.way == 'furthest':
indices = np.argsort(cur_prob)[::-1][:self.num_adversaries_per_instance]
if self.way == 'closest':
cur_prob[y[centroids_ind[i_centr]]==y]=1
indices = np.argsort(cur_prob)[:self.num_adversaries_per_instance]
indices = self._fix_class_indices(y, indices)
#print(cur_X.shape, X[indices,:].shape)
cur_X = np.vstack((cur_X, X[indices,:]))
cur_y = np.append(cur_y, y[indices])
#cur_y.extend(indices)
#cur_X = np.delete(cur_X, 0, axis=0)
#cur_y = y[cur_y]
plot_selected_points(self, X,y, centroids_ind[i_centr], indices)
cc = raw_input()
if cc == 'q':
break
return cur_X, cur_y, False
def most_common(lst):
if isinstance(lst, np.ndarray):
lst = lst.tolist()
#print(lst, max(set(lst), key=lst.count) )
return max(set(lst), key=lst.count)
ac = Adversarial_Cascade(way='closest')
ac.viz_fit(X,y)
def plot_selected_points(clf, X, y, center_id, indices, s=100):
fig = plt.figure(figsize=(10,10))
ax = plt.gca()
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
plot_contours(ax, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=s, edgecolors='grey')
ax.scatter(X[center_id,0], X[center_id,1], s=s, edgecolors='green', facecolors='none')
ax.scatter(X[indices,0], X[indices, 1], s=s, edgecolors='black', facecolors='none')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title('Selected adversarial points')
plt.show()
fig = plt.figure(figsize=(10,10))
ax = plt.gca()
plt.subplots_adjust(wspace=0.4, hspace=0.4)
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
print("%s : %0.3f"% (title, 100*accuracy_score(y, clf.predict(X))))
plot_contours(ax, clf, xx, yy,
cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=80, edgecolors='grey', facecolors='none')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
plt.show()
class Linear_Classifiers(BaseEstimator, ClassifierMixin):
def __init__(self, base_estimator=KNeighborsClassifier(), n_estimators=10, acc_target=0.99,
num_adversaries_per_instance=4, way = 'prob',
random_state=42, optim=False, parameters=None, metric='accuracy', oob=False, oob_size=0.1):
self.base_estimator = base_estimator
self.n_estimators = n_estimators
self.acc_target = acc_target
self.num_adversaries_per_instance = num_adversaries_per_instance
self.way = way
self.random_state = check_random_state(random_state)
self.optim = optim
self.oob = oob
self.oob_size = oob_size
self.X_oob = None
self.y_oob = None
if self.optim:
self.parameters = parameters
else:
self.parameters = None
self.scoring = get_scorer(metric)
self.acc = 0
self.ensemble = []
self.selected_indices = []
def fit(self, X, y):
return self._fit(X, y)
def _fit(self,X,y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
y = self._validate_y(y)
if self.oob:
X, self.X_oob, y, self.y_oob = train_test_split(X,y,test_size=self.oob_size,stratify=y)
n_samples, self.n_features_ = X.shape
cur_X, cur_y = X, y
self.selected_indices.append([i for i in xrange(X.shape[0])])
flag_target = False
for i_est in xrange(self.n_estimators):
cur_mod = clone(self.base_estimator)
if self.optim:
grid_search = GridSearchCV(cur_mod, self.parameters, n_jobs=-1, verbose=1, refit=True)
grid_search.fit(cur_X, cur_y)
cur_mod = grid_search.best_estimator_
else:
cur_mod.fit(cur_X, cur_y)
self.ensemble.append(cur_mod)
cur_X, cur_y, flag_target = self._create_next_batch(X, y)
if flag_target:
break
#print(cur_X.shape, cur_y.shape)
print("%d ESTIMATORS -- %0.3f" % (len(self.ensemble), 100*accuracy_score(y, self.predict(X), normalize=True)))
return self
def _create_next_batch(self, X, y):
if self.oob:
preds = self.predict(self.X_oob)
centroids = self.X_oob[preds != self.y_oob]
centroids_ind = np.argwhere(preds != self.y_oob).reshape(-1,)
cur_X = copy.deepcopy(self.X_oob[centroids_ind,:])
cur_y = copy.deepcopy(self.y_oob[centroids_ind])
str_target = "OOB SAMPLE"
self.acc = accuracy_score(self.y_oob, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(self.X_oob.shape[0]))
else:
preds = self.predict(X)
centroids = X[preds != y]
centroids_ind = np.argwhere(preds!=y).reshape(-1,)
cur_X = copy.deepcopy(X[centroids_ind,:])
cur_y = copy.deepcopy(y[centroids_ind])
str_target = "TRAIN SAMPLE"
self.acc = accuracy_score(y, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(X.shape[0]))
if self.acc > self.acc_target:
#return X, y, False
#print("ACCURACY ON THE %s IS %0.3f" % (str_target, 100*(1-(centroids.shape[0])/float(X.shape[0]))))
#print("STOPPING WITH %d BASE MODELS" % len(self.ensemble))
return None, None, True
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
for i_centr in xrange(probas.shape[0]):
# Make zero the probability that a same-class sample is picked
cur_prob = copy.deepcopy(probas[i_centr,:])
cur_prob[y[centroids_ind[i_centr]]==y]=0
print(cur_prob.shape, np.sum(cur_prob))
cur_prob /= np.sum(cur_prob)
if self.way == 'prob':
indices = self.random_state.choice([i for i in xrange(0, probas.shape[1])],
self.num_adversaries_per_instance, p=cur_prob)
if self.way == 'furthest':
indices = np.argsort(cur_prob)[::-1][:self.num_adversaries_per_instance]
if self.way == 'closest':
cur_prob[y[centroids_ind[i_centr]]==y]=1
indices = np.argsort(cur_prob)[:self.num_adversaries_per_instance]
indices = self._fix_class_indices(y, indices)
#print(cur_X.shape, X[indices,:].shape)
cur_X = np.vstack((cur_X, X[indices,:]))
cur_y = np.append(cur_y, y[indices])
#cur_y.extend(indices)
#cur_X = np.delete(cur_X, 0, axis=0)
#cur_y = y[cur_y]
return cur_X, cur_y, False
def _fix_class_indices(self, y, samples_indices):
in_set = set(y[samples_indices])
a = set(y).difference(in_set)
for item in a:
samples_indices= np.append(samples_indices, [np.where(y==item)[0][0]])
return samples_indices
def _validate_y(self, y):
y = column_or_1d(y, warn=True)
check_classification_targets(y)
self.classes_, y = np.unique(y, return_inverse=True)
self.n_classes_ = len(self.classes_)
return y
def predict(self, X):
"""Predict class for X.
The predicted class of an input sample is computed as the class with
the highest mean predicted probability. If base estimators do not
implement a ``predict_proba`` method, then it resorts to voting.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
if hasattr(self.base_estimator, "predict_proba"):
predicted_probability = self.predict_proba(X)
return self.classes_.take((np.argmax(predicted_probability, axis=1)),
axis=0)
else:
predicted_probability = np.zeros((X.shape[0],1), dtype=int)
for i, ens in enumerate(self.ensemble):
predicted_probability = np.hstack((predicted_probability, ens.predict(X).reshape(-1,1)))
predicted_probability = np.delete(predicted_probability,0,axis=1)
final_pred = []
for sample in xrange(X.shape[0]):
final_pred.append(most_common(predicted_probability[sample,:]))
#votes = []
#for i, mod_vote in predictions[sample,:]:
# votes.extend([predictions[sample, i] for j in xrange(int(self.acc[i]))])
#final_pred = most_common(votes)
return np.array(final_pred)
def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample is computed as
the mean predicted class probabilities of the base estimators in the
ensemble. If base estimators do not implement a ``predict_proba``
method, then it resorts to voting and the predicted class probabilities
of an input sample represents the proportion of estimators predicting
each class.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
p : array of shape = [n_samples, n_classes]
The class probabilities of the input samples. The order of the
classes corresponds to that in the attribute `classes_`.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1}."
"".format(self.n_features_, X.shape[1]))
all_proba = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_proba += ens.predict_proba(X)
all_proba /= len(self.ensemble)  # the cascade may stop early, so average over the fitted models
#print(all_proba.shape)
#print(all_proba)
#proba = np.sum(all_proba, axis=0) / self.n_estimators
#print(proba.shape)
#print(proba)
return all_proba
@if_delegate_has_method(delegate='base_estimator')
def decision_function(self, X):
"""Average of the decision functions of the base classifiers.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrices are accepted only if
they are supported by the base estimator.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The columns correspond
to the classes in sorted order, as they appear in the attribute
``classes_``. Regression and binary classification are special
cases with ``k == 1``, otherwise ``k==n_classes``.
"""
check_is_fitted(self, "classes_")
# Check data
X = check_array(
X, accept_sparse=['csr', 'csc'], dtype=None,
force_all_finite=False
)
if self.n_features_ != X.shape[1]:
raise ValueError("Number of features of the model must "
"match the input. Model n_features is {0} and "
"input n_features is {1} "
"".format(self.n_features_, X.shape[1]))
all_decisions = np.zeros((X.shape[0], self.n_classes_))
for i, ens in enumerate(self.ensemble):
all_decisions += ens.predict_proba(X)
decisions = all_decisions / self.n_estimators  # keep shape [n_samples, n_classes] as documented
return decisions
def viz_fit(self, X, y):
X, y = check_X_y(
X, y, ['csr', 'csc'], dtype=None, force_all_finite=False,
multi_output=True)
y = self._validate_y(y)
if self.oob:
X, self.X_oob, y, self.y_oob = train_test_split(X,y,test_size=0.1,stratify=y)
n_samples, self.n_features_ = X.shape
cur_X, cur_y = X, y
self.selected_indices.append([i for i in xrange(X.shape[0])])
flag_target = False
for i_est in xrange(self.n_estimators):
cur_mod = clone(self.base_estimator)
if self.optim:
grid_search = GridSearchCV(cur_mod, self.parameters, n_jobs=-1, verbose=1, refit=True)
grid_search.fit(cur_X, cur_y)
cur_mod = grid_search.best_estimator_
else:
cur_mod.fit(cur_X, cur_y)
self.ensemble.append(cur_mod)
cur_X, cur_y, flag_target = self.viz_create_next_batch(X, y)
if flag_target:
break
#print(cur_X.shape, cur_y.shape)
print("%d ESTIMATORS -- %0.3f" % (len(self.ensemble), 100*accuracy_score(y, self.predict(X), normalize=True)))
return self
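    # Build the training set for the next base estimator: keep the samples the
    # current ensemble misclassifies (the "centroids") and, for each of them, add
    # extra samples from other classes chosen by pairwise distance
    # (way = 'prob', 'furthest' or 'closest'), plotting every selection.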
def viz_create_next_batch(self, X, y):
if self.oob:
preds = self.predict(self.X_oob)
centroids = self.X_oob[preds != self.y_oob]
centroids_ind = np.argwhere(preds != self.y_oob).reshape(-1,)
cur_X = copy.deepcopy(self.X_oob[centroids_ind,:])
cur_y = copy.deepcopy(self.y_oob[centroids_ind])
str_target = "OOB SAMPLE"
self.acc = accuracy_score(self.y_oob, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(self.X_oob.shape[0]))
else:
preds = self.predict(X)
centroids = X[preds != y]
centroids_ind = np.argwhere(preds!=y).reshape(-1,)
cur_X = copy.deepcopy(X[centroids_ind,:])
cur_y = copy.deepcopy(y[centroids_ind])
str_target = "TRAIN SAMPLE"
self.acc = accuracy_score(y, preds, normalize=True)
#acc = (1-(centroids.shape[0])/float(X.shape[0]))
if self.acc > self.acc_target:
#return X, y, False
#print("ACCURACY ON THE %s IS %0.3f" % (str_target, 100*(1-(centroids.shape[0])/float(X.shape[0]))))
#print("STOPPING WITH %d BASE MODELS" % len(self.ensemble))
return None, None, True
probas = pairwise_distances(centroids, X)
probas /= np.sum(probas, axis=1).reshape(-1,1)
for i_centr in xrange(probas.shape[0]):
# Make zero the probability that a same-class sample is picked
cur_prob = copy.deepcopy(probas[i_centr,:])
cur_prob[y[centroids_ind[i_centr]]==y]=0
print(cur_prob.shape, np.sum(cur_prob))
cur_prob /= np.sum(cur_prob)
if self.way == 'prob':
indices = self.random_state.choice([i for i in xrange(0, probas.shape[1])],
self.num_adversaries_per_instance, p=cur_prob)
if self.way == 'furthest':
indices = np.argsort(cur_prob)[::-1][:self.num_adversaries_per_instance]
if self.way == 'closest':
cur_prob[y[centroids_ind[i_centr]]==y]=1
indices = np.argsort(cur_prob)[:self.num_adversaries_per_instance]
indices = self._fix_class_indices(y, indices)
#print(cur_X.shape, X[indices,:].shape)
cur_X = np.vstack((cur_X, X[indices,:]))
cur_y = np.append(cur_y, y[indices])
#cur_y.extend(indices)
#cur_X = np.delete(cur_X, 0, axis=0)
#cur_y = y[cur_y]
plot_selected_points(self, X,y, centroids_ind[i_centr], indices)
cc = raw_input()
if cc == 'q':
exit()
return cur_X, cur_y, False
def most_common(lst):
if isinstance(lst, np.ndarray):
lst = lst.tolist()
#print(lst, max(set(lst), key=lst.count) )
return max(set(lst), key=lst.count) | _____no_output_____ | MIT | RANDOM PATCHES WITH NON-UNIFORM SAMPLING.ipynb | kbogas/Cascada |
CutMix Callback

> Callback to apply [CutMix](https://arxiv.org/pdf/1905.04899.pdf) data augmentation technique to the training data.

From the [research paper](https://arxiv.org/pdf/1905.04899.pdf), `CutMix` is a way to combine two images. It comes from `MixUp` and `Cutout`. In this data augmentation technique:

> patches are cut and pasted among training images where the ground truth labels are also mixed proportionally to the area of the patches

Also, from the paper:

> By making efficient use of training pixels and retaining the regularization effect of regional dropout, CutMix consistently outperforms the state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task. Moreover, unlike previous augmentation methods, our CutMix-trained ImageNet classifier, when used as a pretrained model, results in consistent performance gains in Pascal detection and MS-COCO image captioning benchmarks. We also show that CutMix improves the model robustness against input corruptions and its out-of-distribution detection performances. | #export
class CutMix(Callback):
"Implementation of `https://arxiv.org/abs/1905.04899`"
run_after,run_valid = [Normalize],False
def __init__(self, alpha=1.): self.distrib = Beta(tensor(alpha), tensor(alpha))
def before_fit(self):
self.stack_y = getattr(self.learn.loss_func, 'y_int', False)
if self.stack_y: self.old_lf,self.learn.loss_func = self.learn.loss_func,self.lf
def after_fit(self):
if self.stack_y: self.learn.loss_func = self.old_lf
def before_batch(self):
W, H = self.xb[0].size(3), self.xb[0].size(2)
lam = self.distrib.sample((1,)).squeeze().to(self.x.device)
lam = torch.stack([lam, 1-lam])
self.lam = lam.max()
shuffle = torch.randperm(self.y.size(0)).to(self.x.device)
xb1,self.yb1 = tuple(L(self.xb).itemgot(shuffle)),tuple(L(self.yb).itemgot(shuffle))
nx_dims = len(self.x.size())
x1, y1, x2, y2 = self.rand_bbox(W, H, self.lam)
self.learn.xb[0][:, :, x1:x2, y1:y2] = xb1[0][:, :, x1:x2, y1:y2]
self.lam = (1 - ((x2-x1)*(y2-y1))/float(W*H)).item()
if not self.stack_y:
ny_dims = len(self.y.size())
self.learn.yb = tuple(L(self.yb1,self.yb).map_zip(torch.lerp,weight=unsqueeze(self.lam, n=ny_dims-1)))
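    # Loss used when the targets are stacked integer labels: interpolate between
    # the loss on the shuffled targets (self.yb1) and the loss on the original
    # targets, weighted by self.lam.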
def lf(self, pred, *yb):
if not self.training: return self.old_lf(pred, *yb)
with NoneReduce(self.old_lf) as lf:
loss = torch.lerp(lf(pred,*self.yb1), lf(pred,*yb), self.lam)
return reduce_loss(loss, getattr(self.old_lf, 'reduction', 'mean'))
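    # Sample a cutout box covering roughly (1 - lam) of the image area, centred
    # at a uniformly random position and clamped to the image borders.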
def rand_bbox(self, W, H, lam):
cut_rat = torch.sqrt(1. - lam)
cut_w = (W * cut_rat).type(torch.long)
cut_h = (H * cut_rat).type(torch.long)
# uniform
cx = torch.randint(0, W, (1,)).to(self.x.device)
cy = torch.randint(0, H, (1,)).to(self.x.device)
x1 = torch.clamp(cx - cut_w // 2, 0, W)
y1 = torch.clamp(cy - cut_h // 2, 0, H)
x2 = torch.clamp(cx + cut_w // 2, 0, W)
y2 = torch.clamp(cy + cut_h // 2, 0, H)
return x1, y1, x2, y2 | _____no_output_____ | Apache-2.0 | nbs/74_callback.cutmix.ipynb | hanshin-back/fastai |
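The bookkeeping above can be illustrated in isolation. The sketch below is a standalone NumPy toy, not part of the fastai implementation (all array names and numbers are made up): it pastes a patch from one image into another and recomputes the target weight `lam` as the fraction of pixels still coming from the original image, which mirrors how the callback rescales the targets or the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 224                                  # image height/width
x1, y1, x2, y2 = 40, 60, 140, 160            # a sampled cutout box

img_a = rng.random((3, H, W))                # "original" image
img_b = rng.random((3, H, W))                # image drawn from the shuffled batch

mixed = img_a.copy()
mixed[:, y1:y2, x1:x2] = img_b[:, y1:y2, x1:x2]   # paste the patch

# target weight = fraction of pixels still coming from img_a
lam = 1 - ((x2 - x1) * (y2 - y1)) / float(W * H)

# with one-hot labels the mixed target is a convex combination of the two labels
label_a = np.array([1.0, 0.0])
label_b = np.array([0.0, 1.0])
mixed_label = lam * label_a + (1 - lam) * label_b
print(lam, mixed_label)
```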
What does a batch look like with the `CutMix` data augmentation applied? First, let's quickly create the `dls` using the `ImageDataLoaders.from_name_re` DataBlocks API. | path = untar_data(URLs.PETS)
pat = r'([^/]+)_\d+.*$'
fnames = get_image_files(path/'images')
item_tfms = [Resize(256, method='crop')]
batch_tfms = [*aug_transforms(size=224), Normalize.from_stats(*imagenet_stats)]
dls = ImageDataLoaders.from_name_re(path, fnames, pat, bs=64, item_tfms=item_tfms,
batch_tfms=batch_tfms) | _____no_output_____ | Apache-2.0 | nbs/74_callback.cutmix.ipynb | hanshin-back/fastai |
Next, let's initialize the `CutMix` callback, create a learner, run one batch, and display the images with their labels. Internally, `CutMix` updates the loss function based on the ratio of the cutout bounding box to the complete image. | cutmix = CutMix(alpha=1.)
with Learner(dls, resnet18(), loss_func=CrossEntropyLossFlat(), cbs=cutmix) as learn:
learn.epoch,learn.training = 0,True
learn.dl = dls.train
b = dls.one_batch()
learn._split(b)
learn('before_batch')
_,axs = plt.subplots(3,3, figsize=(9,9))
dls.show_batch(b=(cutmix.x,cutmix.y), ctxs=axs.flatten()) | _____no_output_____ | Apache-2.0 | nbs/74_callback.cutmix.ipynb | hanshin-back/fastai |
Using `CutMix` in Training | learn = cnn_learner(dls, resnet18, loss_func=CrossEntropyLossFlat(), cbs=cutmix, metrics=[accuracy, error_rate])
# learn.fit_one_cycle(1) | _____no_output_____ | Apache-2.0 | nbs/74_callback.cutmix.ipynb | hanshin-back/fastai |
Export - | #hide
from nbdev.export import notebook2script
notebook2script() | Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
| Apache-2.0 | nbs/74_callback.cutmix.ipynb | hanshin-back/fastai |
Installing required packages | from IPython.display import clear_output
!pip install --upgrade pip
!pip install findspark
!pip install pyspark
clear_output(wait=False) | _____no_output_____ | MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
Importing global objects | import findspark, pyspark
from pyspark.sql import SparkSession
from pyspark import SparkFiles | _____no_output_____ | MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
Global Settings (needed for environments other than Databricks) | findspark.init()
spark = SparkSession.builder.getOrCreate() | _____no_output_____ | MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
Reading data source | url = 'https://raw.githubusercontent.com/edsonlourenco/public_datasets/main/Carros.csv'
spark.sparkContext.addFile(url)
csv_cars = SparkFiles.get("Carros.csv")
df_cars = spark.read.csv(csv_cars, header=True, inferSchema=True, sep=';') | _____no_output_____ | MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
Checking **data** | df_cars.orderBy('Consumo').show(truncate=False) | +-------+---------+-----------+---------------+----+-----+---------+-----------+-------+-----------+---+
|Consumo|Cilindros|Cilindradas|RelEixoTraseiro|Peso|Tempo|TipoMotor|Transmissao|Marchas|Carburadors|HP |
+-------+---------+-----------+---------------+----+-----+---------+-----------+-------+-----------+---+
|15 |8 |301 |354 |357 |146 |0 |1 |5 |8 |335|
|21 |6 |160 |39 |2875|1702 |0 |1 |4 |4 |110|
|21 |6 |160 |39 |262 |1646 |0 |1 |4 |4 |110|
|26 |4 |1203 |443 |214 |167 |0 |1 |5 |2 |91 |
|104 |8 |472 |293 |525 |1798 |0 |0 |3 |4 |205|
|104 |8 |460 |3 |5424|1782 |0 |0 |3 |4 |215|
|133 |8 |350 |373 |384 |1541 |0 |0 |3 |4 |245|
|143 |8 |360 |321 |357 |1584 |0 |0 |3 |4 |245|
|147 |8 |440 |323 |5345|1742 |0 |0 |3 |4 |230|
|152 |8 |2758 |307 |378 |18 |0 |0 |3 |3 |180|
|152 |8 |304 |315 |3435|173 |0 |0 |3 |2 |150|
|155 |8 |318 |276 |352 |1687 |0 |0 |3 |2 |150|
|158 |8 |351 |422 |317 |145 |0 |1 |5 |4 |264|
|164 |8 |2758 |307 |407 |174 |0 |0 |3 |3 |180|
|173 |8 |2758 |307 |373 |176 |0 |0 |3 |3 |180|
|178 |6 |1676 |392 |344 |189 |1 |0 |4 |4 |123|
|181 |6 |225 |276 |346 |2022 |1 |0 |3 |1 |105|
|187 |8 |360 |315 |344 |1702 |0 |0 |3 |2 |175|
|192 |6 |1676 |392 |344 |183 |1 |0 |4 |4 |123|
|192 |8 |400 |308 |3845|1705 |0 |0 |3 |2 |175|
+-------+---------+-----------+---------------+----+-----+---------+-----------+-------+-----------+---+
only showing top 20 rows
| MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
Transform with VectorAssembler

Importing the **VectorAssembler** class | from pyspark.ml.feature import VectorAssembler
Applying the transformation and creating the features column | vectas = VectorAssembler(inputCols=[
"Consumo",
"Cilindros",
"Cilindradas",
"RelEixoTraseiro",
"Peso",
"Tempo",
"TipoMotor",
"Transmissao",
"Marchas",
"Carburadors"
],
outputCol="features")
df_cars_vet = vectas.transform(df_cars)
df_cars_vet.orderBy('Consumo').select('features').show(truncate=True) #('caracteristicas').display() | +--------------------+
| features|
+--------------------+
|[15.0,8.0,301.0,3...|
|[21.0,6.0,160.0,3...|
|[21.0,6.0,160.0,3...|
|[26.0,4.0,1203.0,...|
|[104.0,8.0,472.0,...|
|[104.0,8.0,460.0,...|
|[133.0,8.0,350.0,...|
|[143.0,8.0,360.0,...|
|[147.0,8.0,440.0,...|
|[152.0,8.0,2758.0...|
|[152.0,8.0,304.0,...|
|[155.0,8.0,318.0,...|
|[158.0,8.0,351.0,...|
|[164.0,8.0,2758.0...|
|[173.0,8.0,2758.0...|
|[178.0,6.0,1676.0...|
|[181.0,6.0,225.0,...|
|[187.0,8.0,360.0,...|
|[192.0,6.0,1676.0...|
|[192.0,8.0,400.0,...|
+--------------------+
only showing top 20 rows
| MIT | 3_VectorAssembler_example.ipynb | edsonlourenco/pyspark_ml_examples |
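The assembled `features` vector is the input format that Spark MLlib estimators expect. As a possible next step (purely illustrative and not part of the original example; the choice of `HP` as the label and of linear regression as the estimator is an assumption), the vectorized DataFrame `df_cars_vet` from above could be fed directly into a model:

```python
from pyspark.ml.regression import LinearRegression

# Illustrative only: split the assembled DataFrame and fit a simple regressor
# that predicts horsepower (HP) from the assembled feature vector.
train_df, test_df = df_cars_vet.randomSplit([0.8, 0.2], seed=42)

lr = LinearRegression(featuresCol="features", labelCol="HP")
model = lr.fit(train_df)

model.transform(test_df).select("HP", "prediction").show(5)
```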
Export | #default_exp templ
from nbdev.export import notebook2script
notebook2script() | Converted om.ipynb.
Converted pspace.ipynb.
Converted templ.ipynb.
| Apache-2.0 | templ.ipynb | mirkoklukas/nbx |
Additional dependencies

You will need to have TensorFlow, Keras, pydot, and Graphviz installed on your OS and added to the path.

```bash
python -m pip install pydot
```
```bash
yay graphviz
```
```bash
sudo apt install python-pydot python-pydot-ng graphviz
```
| import os
import sys
import time
import warnings
warnings.filterwarnings("ignore")
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession
packages = [
'JohnSnowLabs:spark-nlp:2.4.2'
]
spark = SparkSession \
.builder \
.appName("ML SQL session") \
.config('spark.jars.packages', ','.join(packages)) \
.config("spark.driver.memory","2g") \
.getOrCreate()
import sparknlp
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
from pyspark.sql import SQLContext
sql = SQLContext(spark)
trainBalancedSarcasmDF = spark.read.option("header", True).option("inferSchema", True) \
.csv("/tmp/train-balanced-sarcasm.csv")
trainBalancedSarcasmDF.printSchema()
# Let's create a temp view (table) for our SQL queries
trainBalancedSarcasmDF.createOrReplaceTempView('sarcasm')
sql.sql('SELECT COUNT(*) FROM sarcasm').collect()
df = sql.sql('''
select label, concat(parent_comment,"\n",comment) as comment
from sarcasm
where comment is not null and parent_comment is not null limit 100000''')
print(type(df))
df.printSchema()
print('rows', df.count())
df = df.limit(2000) #minimize dataset if you are not running on a cluster
df.show()
from sparknlp.annotator import *
from sparknlp.common import *
from sparknlp.base import *
from pyspark.ml import Pipeline
document_assembler = DocumentAssembler() \
.setInputCol("comment") \
.setOutputCol("document")
sentence_detector = SentenceDetector() \
.setInputCols(["document"]) \
.setOutputCol("sentence") \
.setUseAbbreviations(True)
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer])
nlp_model = nlp_pipeline.fit(df)
processed = nlp_model.transform(df)
processed.show()
train, test = processed.randomSplit(weights=[0.7, 0.3], seed=123)
print(train.count())
print(test.count())
glove = WordEmbeddingsModel.pretrained()
train_featurized = glove.transform(train)
train_featurized.show()
test_featurized = glove.transform(test)
test_featurized.show()
def get_features(row):
result = []
for tk in row:
result.append(tk['embeddings'])
return np.array(result)
def build_data(df, chunks=10):
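    # Split the DataFrame into random chunks and collect them one at a time, so
    # that the embeddings are never pulled onto the driver in a single collect().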
x_train = []
y_train = []
row_count = df.count()
i = 0
chunks = df.randomSplit(weights=[1/chunks] * chunks)
for chunk in chunks:
rows = chunk.collect()
for row in rows:
if i % 1000 == 0:
print('row {} / {} ({:.1f} %)'.format(i, row_count, 100 * i / row_count))
embeddings = get_features(row['embeddings'])
label = row['label']
x_train.append(embeddings)
y_train.append(label)
i += 1
x_train = np.array(x_train)
y_train = np.array(y_train)
return x_train, y_train
x_train, y_train = build_data(train_featurized)
x_test, y_test = build_data(test_featurized)
spark.stop()
print('Train Labels:\n', pd.Series(y_train).value_counts())
print('Test Labels:\n', pd.Series(y_test).value_counts())
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
# set parameters for our model:
maxlen = 100 #pad/truncate each comment to at most 100 tokens
batch_size = 32 #size of the batch
filters = 50 #dimension of filters for the convolutional layer
kernel_size = 3 #size of the kernel used in the convolutional layer
hidden_dims = 250 #dimension of the hidden layer
epochs = 5 #number of training epochs
x_train = sequence.pad_sequences(x_train, maxlen=maxlen, dtype='float32')  # keep float embeddings (default dtype would cast to int32)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen, dtype='float32')
print('Build model...')
model = Sequential()
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
from keras import backend as K
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy','mae'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
from IPython.display import Image
from keras.utils.vis_utils import model_to_dot
dot = model_to_dot(model)
Image(dot.create_png()) | _____no_output_____ | Apache-2.0 | tutorials/jupyter/8- Sarcasm Classifiers (GloVe and CNN).ipynb | nabinkhadka/spark-nlp-workshop |
Train your Unet with membrane data

The membrane data is in the folder membrane/; it is a binary classification task. The input shape of the image and the mask are the same: (batch_size, rows, cols, channels = 1).

Train with data generator | data_gen_args = dict(rotation_range=0.2,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05,
zoom_range=0.05,
horizontal_flip=True,
fill_mode='nearest')
myGene = trainGenerator(2,'data/membrane/train','image','label',data_gen_args,save_to_dir = None)
model = unet()
model_checkpoint = ModelCheckpoint('unet_membrane.hdf5', monitor='loss',verbose=1, save_best_only=True)
model.fit_generator(myGene,steps_per_epoch=2000,epochs=5,callbacks=[model_checkpoint]) | _____no_output_____ | MIT | trainUnet.ipynb | chitrakumarsai/Semantic-segmentation---Unet- |
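Before launching a long training run it can be worth confirming that the generator really yields image/mask pairs with the shape described above, (batch_size, rows, cols, 1). A quick sanity check is sketched below; it assumes `myGene` from the cell above is still available and note that it consumes one batch from the generator.

```python
# Pull a single batch and inspect it: both arrays should be 4-D with one
# channel, and the values should already be scaled to the [0, 1] range.
img_batch, mask_batch = next(myGene)
print("images:", img_batch.shape, img_batch.min(), img_batch.max())
print("masks: ", mask_batch.shape, mask_batch.min(), mask_batch.max())
```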
Train with npy file | #imgs_train,imgs_mask_train = geneTrainNpy("data/membrane/train/aug/","data/membrane/train/aug/")
#model.fit(imgs_train, imgs_mask_train, batch_size=2, nb_epoch=10, verbose=1,validation_split=0.2, shuffle=True, callbacks=[model_checkpoint]) | _____no_output_____ | MIT | trainUnet.ipynb | chitrakumarsai/Semantic-segmentation---Unet- |
Test your model and save the predicted results | testGene = testGenerator("data/membrane/test")
model = unet()
model.load_weights("unet_membrane.hdf5")
results = model.predict_generator(testGene,30,verbose=1)
saveResult("data/membrane/test",results) | C:\Users\xuhaozhi\Documents\Study\unet\model.py:34: UserWarning: The `merge` function is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
merge6 = merge([drop4,up6], mode = 'concat', concat_axis = 3)
C:\SoftWare\Anaconda2\envs\python3\lib\site-packages\keras\legacy\layers.py:465: UserWarning: The `Merge` layer is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
name=name)
C:\Users\xuhaozhi\Documents\Study\unet\model.py:39: UserWarning: The `merge` function is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
merge7 = merge([conv3,up7], mode = 'concat', concat_axis = 3)
C:\Users\xuhaozhi\Documents\Study\unet\model.py:44: UserWarning: The `merge` function is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
merge8 = merge([conv2,up8], mode = 'concat', concat_axis = 3)
C:\Users\xuhaozhi\Documents\Study\unet\model.py:49: UserWarning: The `merge` function is deprecated and will be removed after 08/2017. Use instead layers from `keras.layers.merge`, e.g. `add`, `concatenate`, etc.
merge9 = merge([conv1,up9], mode = 'concat', concat_axis = 3)
C:\Users\xuhaozhi\Documents\Study\unet\model.py:55: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=Tensor("in..., outputs=Tensor("co...)`
model = Model(input = inputs, output = conv10)
| MIT | trainUnet.ipynb | chitrakumarsai/Semantic-segmentation---Unet- |
**Note: Please use the [pyEOF](https://pyeof.readthedocs.io/en/latest/installation.html) environment for this script.** This script implements EOF, REOF, and k-means clustering to derive the regions. | from pyEOF import *
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import gc
import warnings
import pickle
from matplotlib.ticker import MaxNLocator
from tqdm import tqdm
from sklearn.cluster import KMeans
import cartopy.crs as ccrs
import cartopy.feature as cfeature
warnings.filterwarnings('ignore')
# select region
def sel_extent(ds):
return ds.sel(lat=slice(6,36),lon=slice(68,98))
# path
ds_path = "./data/daily_surface_pm25_RH50.nc"
mask_path = "./data/land_mask.nc" | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
plot the time series of the mean PM2.5. We decided to use April and August as the testing data. | mask = xr.open_dataset(mask_path)
ds = sel_extent(xr.open_dataset(ds_path)).where(mask["mask"])
ds["PM25"].groupby('time.month').mean(dim=["lon","lat","time"]).plot()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
get the data and implement EOFs. Based on the results, we will select 4 PCs (n=4) and implement varimax rotated EOFs (REOFs). | n=28
# remove months "4" (April) and "8" (August), to be consistent with training data
ds = ds.sel(time=ds.time.dt.month.isin([1,2,3,
5,6,7,
9,10,11,12]))
df = ds["PM25"].to_dataframe().reset_index() # get df from ds
# process the data for implementing pyEOF
df_data = get_time_space(df, time_dim = "time", lumped_space_dims = ["lat","lon"])
# implement PCA/EOF
pca = df_eof(df_data)
eofs = pca.eofs(s=2, n=n)
eofs_ds = eofs.stack(["lat","lon"], dropna=False).to_xarray()
pcs = pca.pcs(s=2, n=n)
evf = pca.evf(n=n)
# show the results
df = pd.DataFrame({"n":np.arange(1,n+1),
"evf":pca.evf(n)*100.0,
"accum":pca.evf(n).cumsum()*100.0
})
plt.scatter(df["n"],df["accum"], s=8)
plt.axhline(y=50,c="r",ls="-.")
plt.xlabel("# of PCs")
plt.ylabel("acc. explained variance [%]")
plt.show()
display(df.transpose()) | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
implement REOFs | n=4
# implement REOF
pca = df_eof(df_data,pca_type="varimax",n_components=n)
eofs = pca.eofs(s=2, n=n)
eofs_ds = eofs.stack(["lat","lon"], dropna=False).to_xarray()
pcs = pca.pcs(s=2, n=n)
evf = pca.evf(n=n)
df = pd.DataFrame({"n":np.arange(1,n+1),
"evf":pca.evf(n)*100.0,
"accum":pca.evf(n).cumsum()*100.0
})
# show the results
fig = plt.figure(figsize=(6,2))
ax = fig.add_subplot(121)
ax.scatter(df["n"],df["evf"], s=8)
ax.set_ylim(0,20)
ax.set_ylabel("explained variance [%]")
ax.set_xlabel("PC")
ax = fig.add_subplot(122)
ax.scatter(df["n"],df["accum"], s=8)
ax.set_ylim(0,60)
ax.set_ylabel("acc. explained variance [%]")
ax.set_xlabel("PC")
ax.axhline(y=50,c="r",ls="-.")
plt.tight_layout()
plt.show()
print("explained variance and acc. explained variance")
display(df.transpose())
print("EOFs loading")
eofs.transpose().describe().loc[["max","min"]].transpose()
eofs_ds = eofs.stack(["lat","lon"], dropna=False).to_xarray()
fig = plt.figure(figsize=(10,2))
for i in range(1,n+1):
ax = fig.add_subplot(1,4,i)
eofs_ds["PM25"].sel(EOF=i).plot(ax=ax,vmax=1.0,vmin=-1.0,cbar_kwargs={'label': ""},cmap="bwr")
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
weighted EOFs loading | eofs_w = pd.DataFrame(data = (eofs.values * evf.reshape(n,1)),
index = eofs.index,
columns = eofs.columns)
eofs_w_ds = eofs_w.stack(["lat","lon"], dropna=False).to_xarray()
fig = plt.figure(figsize=(10,2))
for i in range(1,n+1):
ax = fig.add_subplot(1,4,i)
eofs_w_ds["PM25"].sel(EOF=i).plot(ax=ax,cmap="bwr",
vmax=1.0*evf[i-1],
vmin=-1.0*evf[i-1],
cbar_kwargs={'label': ""})
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
implement k-Means | # get the index which is not "nan"
placeholder_idx = np.argwhere(~np.isnan((eofs_w.values)[0])).reshape(-1)
# get the matrix without missing values: locations (row) * EOFs (columns)
m = eofs_w.values[:,placeholder_idx].transpose()
# clustering and calculate the Sum_of_squared_distances
Sum_of_squared_distances = []
K = range(2,15)
for n_clusters in tqdm(K):
kmeans = KMeans(n_clusters=n_clusters, random_state=66).fit(m)
Sum_of_squared_distances.append(kmeans.inertia_)
ssd = Sum_of_squared_distances
residual_x = K[1:]
residual = [(x - y) for x,y in zip(ssd,ssd[1:])]
pd.DataFrame({"n_clusters":K, "ssd":Sum_of_squared_distances}).transpose()
fig = plt.figure(figsize=(8,4))
cluster_list = [3,4,5,6,7,8]
for i in range(len(cluster_list)):
n_cluster = cluster_list[i]
ax = fig.add_subplot(2,3,i+1)
clusters = KMeans(n_clusters=n_cluster, random_state=66).fit_predict(m)
df_f = eofs.copy()
df_f.loc[str(n+1),:] = np.nan
df_f.iloc[n,placeholder_idx] = clusters
df_fs = df_f.stack(["lat","lon"], dropna=False).to_xarray()
df_fs.sel(EOF=str(n+1))["PM25"].plot(ax=ax, cbar_kwargs={"label":"cluster"})
ax.set_title(f"clusters={n_cluster}")
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
we select n_cluster = 6 for further analysis | n_cluster = 6
fig = plt.figure(figsize=(8,3))
ax = fig.add_subplot(121)
ax.plot(K, Sum_of_squared_distances, 'bx-')
ax.plot(n_cluster,Sum_of_squared_distances[n_cluster-2],"rx")
ax.set_xlabel('number of clusters')
ax.set_ylabel('sum of squared distances')
# ax.set_title('Elbow method for optimal number of clusters')
ax = fig.add_subplot(122)
ax.plot(residual_x, residual, 'bx-')
ax.set_xlabel('number of clusters')
ax.set_ylabel('diff. in ssd')
ax.plot(n_cluster,residual[n_cluster-3],"rx")
# ax.set_title('difference in sum of squared distances')
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
use n_cluster = 6 to compute the cluster assignments | clusters = KMeans(n_clusters=n_cluster, random_state=66).fit_predict(m)
df_f = eofs.copy()
df_f.loc[str(n+1),:] = np.nan
df_f.iloc[n,placeholder_idx] = clusters
ds_f = df_f.stack(["lat","lon"], dropna=False).to_xarray()
fig = plt.figure(figsize=(3,3))
ax = fig.add_subplot(111, projection=ccrs.PlateCarree())
ax.set_extent([68,98,6,36],crs=ccrs.PlateCarree())
ds_regions = ds_f.sel(EOF=str(n+1))["PM25"].drop("EOF")
ds_regions.plot(ax=ax,cbar_kwargs={"label":"cluster"})
# ax.add_feature(cfeature.STATES.with_scale('10m'),
# facecolor='none',
# edgecolor='black')
# ax.add_feature(cfeature.BORDERS,edgecolor='red')
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
save the cluster masks | fig = plt.figure(figsize=(2*n_cluster,2))
for i in range(n_cluster):
ax = fig.add_subplot(1,n_cluster,i+1)
ds_f["mask_"+str(i)] = ds_regions.where(ds_regions==i).notnull().squeeze()
ds_f["mask_"+str(i)].plot(ax=ax,cbar_kwargs={"label":""})
ax.set_title("mask_"+str(i))
plt.tight_layout()
plt.show()
mask_ls = ["mask_"+str(i) for i in range(n_cluster)]
ds_f[mask_ls].to_netcdf("./data/cluster_mask_"+str(n_cluster)+".nc",engine="scipy") | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
save the regional masks | cluster_mask = xr.open_dataset("./data/cluster_mask_"+str(n_cluster)+".nc",engine="scipy")
fig = plt.figure(figsize=(2*n_cluster,2))
for i in range(n_cluster):
ax = fig.add_subplot(1,n_cluster,i+1)
ds.mean(dim="time").where(cluster_mask["mask_"+str(i)])["PM25"].plot(ax=ax,cbar_kwargs={"label":""})
ax.set_title("mask_"+str(i))
ax.set_xlabel("")
ax.set_ylabel("")
plt.tight_layout()
plt.show()
# process
cluster_mask["mask_"+str(0)] = cluster_mask.where((cluster_mask.lon>=90) & (cluster_mask.lat>=15),0)["mask_"+str(0)]
cluster_mask["mask_"+str(3)] = cluster_mask.where((cluster_mask.lat>=12) & (cluster_mask.lat<=30),0)["mask_"+str(3)]
cluster_mask["mask_"+str(4)] = cluster_mask.where((cluster_mask.lat<=20.5) & (cluster_mask.lat>=8),0)["mask_"+str(4)]
# rename
cluster_mask = cluster_mask[["mask_0","mask_1","mask_3","mask_4"]]\
.rename({"mask_0":"E","mask_1":"W","mask_3":"N","mask_4":"S"})
loc_name = list(cluster_mask)
fig = plt.figure(figsize=(n_cluster*2,2))
for i in range(len(loc_name)):
ax = fig.add_subplot(1,n_cluster,i+1)
ds.mean(dim="time").where(cluster_mask[loc_name[i]])["PM25"].plot(ax=ax,cbar_kwargs={"label":""})
ax.set_title(loc_name[i])
ax.set_xlabel("")
ax.set_ylabel("")
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
save and load the regional mask | # save the regional mask
cluster_mask.to_netcdf("./data/r_mask.nc",engine="scipy")
# load the regional mask
test = xr.open_dataset("./data/r_mask.nc",engine="scipy")
fig = plt.figure(figsize=(n_cluster*2,2))
for i in range(len(loc_name)):
ax = fig.add_subplot(1,n_cluster,i+1)
ds.mean(dim="time").where(test[loc_name[i]])["PM25"].plot(ax=ax,cbar_kwargs={"label":""})
ax.set_title(loc_name[i])
ax.set_xlabel("")
ax.set_ylabel("")
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | 1_get_regions_eof_reofs.ipynb | zzheng93/code_DSI_India_AutoML |
 Link Prediction - IntroductionIn this Notebook we are going to examine the process of using Amazon Neptune ML feature to perform link prediction in a property graph. Note: This notebook take approximately 1 hour to complete[Neptune ML](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning.htmlmachine-learning-overview) is a feature of Amazon Neptune that enables users to automate the creation, management, and usage of Graph Neural Network (GNN) machine learning models within Amazon Neptune. Neptune ML is built using [Amazon SageMaker](https://aws.amazon.com/sagemaker/) and [Deep Graph Library](https://www.dgl.ai/) and provides a simple and easy to use mechanism to build/train/maintain these models and then use the predictive capabilities of these models within a Gremlin query to predict elements or property values in the graph. For this notebook we are going to show how to perform a common machine learning task known as **link prediction**. Link prediction is a unsupervised machine learning task where a model built using nodes and edges in the graph to predict whether an edge exists between two particular nodes. Link prediction is not unique to GNN based models (look at DeepWalk or node2vec) but the GNN based models in Neptune ML provide additional context to the predictions by combining the connectivity and features of the local neighborhood of a node to create a more predictive model.Link prediction is commonly used to solve many common buisness problems such as:* Predicting group membership in a social or identity network* [Entity Resolution in an identity graph](https://github.com/awslabs/sagemaker-graph-entity-resolution/blob/master/source/sagemaker/dgl-entity-resolution.ipynb)* Knowledge graph completion* Product recommendationNeptune ML uses a four step process to automate the process of creating production ready GNN models:1. **Load Data** - Data is loaded into a Neptune cluster using any of the normal methods such as the Gremlin drivers or using the Neptune Bulk Loader.2. **Export Data** - A service call is made specifying the machine learning model type and model configuration parameters. The data and model configuration parameters are then exported from a Neptune cluster to an S3 bucket.3. **Model Training** - A set of service calls are made to pre-process the exported data, train the machine learning model, and then generate an Amazon SageMaker endpoint that exposes the model.4. **Run Queries** - The final step is to use this inference endpoint within our Gremlin queries to infer data using the machine learning model. For this notebook we'll use the [MovieLens 100k dataset](https://grouplens.org/datasets/movielens/100k/) provided by [GroupLens Research](https://grouplens.org/datasets/movielens/). This dataset consists of movies, users, and ratings of those movies by users. For this notebook we'll walk through how Neptune ML can predict product recommendations in a product knowledge graph. To demonstrate this we'll predict the movies a user would be most likely to rate as well as which users are most likely to rate a given movie. We'll walk through each step of loading and exporting the data, configuring and training the model, and finally we'll show how to use that model to infer the genre of movies using Gremlin traversals. Checking that we are ready to run Neptune ML Run the code below to check that this cluster is configured to run Neptune ML. | import neptune_ml_utils as neptune_ml
neptune_ml.check_ml_enabled() | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
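Before running the full pipeline, it may help to see in miniature what a link prediction model ultimately produces. A GNN link prediction model learns an embedding vector for every node and scores a candidate edge by combining the embeddings of its two endpoints; high-scoring pairs are predicted to be connected. The sketch below is purely conceptual (random NumPy vectors and a plain dot-product score, not Neptune ML's actual model or API) and only illustrates the ranking idea used later when we ask which movies `user_1` is most likely to rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these 16-dimensional vectors were learned by a GNN for a few nodes.
user_emb = {"user_1": rng.normal(size=16)}
movie_emb = {f"movie_{i}": rng.normal(size=16) for i in range(1, 6)}

# Score every candidate (user_1)-[rated]->(movie) edge and rank the movies.
scores = {m: float(user_emb["user_1"] @ v) for m, v in movie_emb.items()}
for movie, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(movie, round(score, 3))
```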
If the check above did not say that this cluster is ready to run Neptune ML jobs, then please check that the cluster meets all the pre-requisites defined [here](https://docs.aws.amazon.com/neptune/latest/userguide/machine-learning.htmlmachine-learning-overview).

Load the data

The first step in building a Neptune ML model is to load data into the Neptune cluster. Loading data for Neptune ML follows the standard process of ingesting data into Amazon Neptune; for this example we'll be using the Bulk Loader. We have written a script that automates the process of downloading the data from the MovieLens website and formatting it to load into Neptune. All you need to provide is an S3 bucket URI that is located in the same region as the cluster.

Note: This is the only step that requires any specific input from the user; all remaining cells will automatically propagate the required values. | s3_bucket_uri="s3://<INSERT S3 BUCKET OR PATH>"
# remove trailing slashes
s3_bucket_uri = s3_bucket_uri[:-1] if s3_bucket_uri.endswith('/') else s3_bucket_uri | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
Now that you have provided an S3 bucket, run the cell below which will download and format the MovieLens data into a format compatible with Neptune's bulk loader. | response = neptune_ml.prepare_movielens_data(s3_bucket_uri) | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
This process only takes a few minutes and once it has completed you can load the data using the `%load` command in the cell below. | %load -s {response} -f csv -p OVERSUBSCRIBE --run | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
Check to make sure the data is loaded

Once the cell has completed, the data has been loaded into the cluster. We verify that the data loaded correctly by running the traversals below to see the count of nodes by label.

Note: The numbers below assume no other data is in the cluster. | %%gremlin
g.V().groupCount().by(label).unfold() | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
If our nodes loaded correctly then the output is:

* 19 genres
* 1682 movies
* 100000 ratings
* 943 users

To check that our edges loaded correctly, we check the edge counts: | %%gremlin
g.E().groupCount().by(label).unfold() | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
If our edges loaded correctly then the output is:

* 100000 about
* 2893 included_in
* 100000 rated
* 100000 wrote

Preparing for export

With our data validated, let's remove a few `rated` edges so that we can build a model that predicts these missing connections. In a normal scenario, the data you would like to predict is most likely missing from the data being loaded, so removing these values prior to building our machine learning model simulates that situation.

Specifically, let's remove the `rated` edges for `user_1`, to provide us with a few candidate vertices to run our link prediction tasks. Let's start by taking a look at what `rated` edges currently exist. | %%gremlin
g.V('user_1').outE('rated') | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
Now let's remove these edges to simulate their absence from our data. | %%gremlin
g.V('user_1').outE('rated').drop() | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |
Checking our data again we see that the edges have now been removed. | %%gremlin
g.V('user_1').outE('rated') | _____no_output_____ | ISC | src/graph_notebook/notebooks/04-Machine-Learning/Neptune-ML-03-Introduction-to-Link-Prediction-Gremlin.ipynb | zacharyrs/graph-notebook |