markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Override these arguments as needed: | address = args.address
smoke_test = args.smoke_test
num_actors = args.num_actors
cpus_per_actor = args.cpus_per_actor
num_actors_inference = args.num_actors_inference
cpus_per_actor_inference = args.cpus_per_actor_inference | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
Connecting to the Ray cluster. Now, let's connect our Python script to this newly deployed Ray cluster! | if not ray.is_initialized():
ray.init(address=address) | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
Data Preparation. We will use the [HIGGS dataset from the UCI Machine Learning dataset repository](https://archive.ics.uci.edu/ml/datasets/HIGGS). The HIGGS dataset consists of 11,000,000 samples and 28 attributes, which is large enough to show the benefits of distributed computation. | LABEL_COLUMN = "label"
if smoke_test:
# Test dataset with only 10,000 records.
FILE_URL = "https://ray-ci-higgs.s3.us-west-2.amazonaws.com/simpleHIGGS" ".csv"
else:
# Full dataset. This may take a couple of minutes to load.
FILE_URL = (
"https://archive.ics.uci.edu/ml/machine-learning-databases"
"/00280/HIGGS.csv.gz"
)
colnames = [LABEL_COLUMN] + ["feature-%02d" % i for i in range(1, 29)]
load_data_start_time = time.time()
df = pd.read_csv(FILE_URL, names=colnames)
load_data_end_time = time.time()
load_data_duration = load_data_end_time - load_data_start_time
print(f"Dataset loaded in {load_data_duration} seconds.") | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
Split data into training and validation. | df_train, df_validation = train_test_split(df)
print(df_train, df_validation) | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
Distributed Training. The ``train_xgboost`` function contains all the logic necessary for training using XGBoost-Ray. Distributed training can not only speed up the process, but also allow you to use datasets that are too large to fit in the memory of a single node. With distributed training, the dataset is sharded across different actors running on separate nodes. Those actors communicate with each other to create the final model. First, the dataframes are wrapped in ``RayDMatrix`` objects, which handle data sharding across the cluster. Then, the ``train`` function is called. The evaluation scores will be saved to the ``evals_result`` dictionary. The function returns a tuple of the trained model (booster) and the evaluation scores. The ``ray_params`` variable expects a ``RayParams`` object that contains Ray-specific settings, such as the number of workers. | def train_xgboost(config, train_df, test_df, target_column, ray_params):
train_set = RayDMatrix(train_df, target_column)
test_set = RayDMatrix(test_df, target_column)
evals_result = {}
train_start_time = time.time()
# Train the classifier
bst = train(
params=config,
dtrain=train_set,
evals=[(test_set, "eval")],
evals_result=evals_result,
verbose_eval=False,
num_boost_round=100,
ray_params=ray_params,
)
train_end_time = time.time()
train_duration = train_end_time - train_start_time
print(f"Total time taken: {train_duration} seconds.")
model_path = "model.xgb"
bst.save_model(model_path)
print("Final validation error: {:.4f}".format(evals_result["eval"]["error"][-1]))
return bst, evals_result | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
We can now pass our Modin dataframes and run the function. We will use ``RayParams`` to specify the number of actors and CPUs to train with. | # standard XGBoost config for classification
config = {
"tree_method": "approx",
"objective": "binary:logistic",
"eval_metric": ["logloss", "error"],
}
bst, evals_result = train_xgboost(
config,
df_train,
df_validation,
LABEL_COLUMN,
RayParams(cpus_per_actor=cpus_per_actor, num_actors=num_actors),
)
print(f"Results: {evals_result}") | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
Prediction. With the model trained, we can now predict on unseen data. For the purposes of this example, we will use the same dataset for prediction as for training. Since prediction is naively parallelizable, distributing it over multiple actors can measurably reduce the amount of time needed. | inference_df = RayDMatrix(df, ignore=[LABEL_COLUMN, "partition"])
results = predict(
bst,
inference_df,
ray_params=RayParams(
cpus_per_actor=cpus_per_actor_inference, num_actors=num_actors_inference
),
)
print(results) | _____no_output_____ | Apache-2.0 | doc/source/ray-core/examples/modin_xgboost/modin_xgboost.ipynb | richardsliu/ray |
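As a small follow-up sketch (not part of the original notebook), the probabilities returned by ``predict`` can be thresholded into hard class labels; the 0.5 cutoff below is an assumption, not something the example prescribes.
import numpy as np
# results holds predicted probabilities for the positive class;
# threshold at an assumed 0.5 cutoff to obtain 0/1 labels
predicted_labels = (np.asarray(results) > 0.5).astype(int)
print(predicted_labels[:10])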
Task 1. Designing functions for building training models from data. In this task you need to develop function prototypes (function declarations without implementations) for a machine learning data analysis task; the following steps must be covered: * Loading data from external sources * Handling unset values (gaps) * Removing uninformative features and objects * Building a model for training * Evaluating model quality * Saving the model to a file | def loading_dataframe(path,source="file",type='csv'):
"""
The function loads a file from external sources.
Parameters:
path - the path the document is loaded from,
source - the document source type (file (default), http, https, ftp),
type - the document extension (txt, csv, xls).
Returns:
load_data - the loaded file.
"""
pass
def preparing_nones(dataframe,*columns,mode):
"""
The function handles unset values (gaps).
Parameters:
dataframe - the data,
columns - what needs to be processed,
mode - what should be done with the gaps.
Returns:
preparing_nones - the data with the gaps handled.
"""
pass
def moving_dataframe(dataframe):
"""Функция для удаления неинформативных признаков.
Параметр:
dataframe — объект, который нужно удалить.
Результат:
moving_nones — удаление файла"""
pass
def constructing_model(dataframe, model_name, **params):
"""
Function for building a model for training.
Parameters:
dataframe(dataframe) - the source dataframe,
model_name(str) - the model name: xgboost, random_forest, sequential,
params(dictionary) - the model parameters.
Returns:
a model over the data.
"""
pass
def scoring_model(model):
"""
Function for evaluating the quality of the model.
Parameter:
model.
Returns:
The model's evaluation score
"""
pass
def saving_model(model,file):
"""
Function for saving the model to a file.
Parameters:
model - the model name,
file - the file name
"""
pass | _____no_output_____ | Apache-2.0 | module_001_python/lesson_004_function/student_tasks/HomeWork.ipynb | VanyaTihonov/ML |
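A minimal usage sketch of how these prototypes could be chained once implemented (every file name, column name, and parameter below is hypothetical and only illustrates the intended order of the steps):
# df = loading_dataframe('data.csv', source='file', type='csv')
# df = preparing_nones(df, 'age', 'income', mode='drop')
# df = moving_dataframe(df)
# model = constructing_model(df, 'random_forest', n_estimators=100)
# score = scoring_model(model)
# saving_model(model, 'model.pkl')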
Task 2. A task of increased difficulty. Implement printing of Pascal's triangle via a function. Example triangle: depth 10 by default. | def print_pascal(primary,deep=10):
for i in range(1,deep+1):
print(pascal(primary,i))
def pascal(primary,deep):
if deep == 1:
new_list = [primary]
elif deep == 2:
new_list = []
for i in range (deep):
new_list.extend(pascal(primary,1))
else:
new_list = []
for i in range(0,deep):
if i == 0 or i == deep-1:
new_list.append(primary)
else:
new_list.append(pascal(primary,deep-1)[i-1]+pascal(primary,deep-1)[i])
return new_list
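# An alternative, purely iterative sketch (not part of the original solution): each new row
# is built from the previous one, avoiding the repeated recursive calls made by pascal().
def print_pascal_iterative(primary, deep=10):
    row = [primary]
    for _ in range(deep):
        print(row)
        # the next row keeps the seed value at both ends and pairwise sums in between
        row = [primary] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [primary]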
print_pascal(1) | [1]
[1, 1]
[1, 2, 1]
[1, 3, 3, 1]
[1, 4, 6, 4, 1]
[1, 5, 10, 10, 5, 1]
[1, 6, 15, 20, 15, 6, 1]
[1, 7, 21, 35, 35, 21, 7, 1]
[1, 8, 28, 56, 70, 56, 28, 8, 1]
[1, 9, 36, 84, 126, 126, 84, 36, 9, 1]
| Apache-2.0 | module_001_python/lesson_004_function/student_tasks/HomeWork.ipynb | VanyaTihonov/ML |
Introduction. This notebook describes how you can use VietOcr to train an OCR model. | ! pip install --quiet vietocr | Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheel for gdown (PEP 517) ... done
Building wheel for lmdb (setup.py) ... done
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.4.0 which is incompatible.
| Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Inference | import matplotlib.pyplot as plt
from PIL import Image
from vietocr.tool.predictor import Predictor
from vietocr.tool.config import Cfg
config = Cfg.load_config_from_name('vgg_transformer') | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Change weights to your own weights or use the default weights from our pretrained model. The path can be a URL or a local file. | # config['weights'] = './weights/transformerocr.pth'
config['weights'] = 'https://drive.google.com/uc?id=13327Y1tz1ohsm5YZMyXVMPIOjoOA0OaA'
config['cnn']['pretrained']=False
config['device'] = 'cuda:0'
config['predictor']['beamsearch']=False
detector = Predictor(config)
! gdown --id 1uMVd6EBjY4Q0G2IkU5iMOQ34X0bysm0b
! unzip -qq -o sample.zip
! ls sample | shuf |head -n 5
img = './sample/031189003299.jpeg'
img = Image.open(img)
plt.imshow(img)
s = detector.predict(img)
s | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
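As a small follow-up sketch (not from the original notebook), the same predictor can be run over several of the sample images; the glob pattern and file extension are assumptions about the sample folder contents.
from pathlib import Path
# run the detector on the first few sample images (folder downloaded in the cells above)
for img_path in sorted(Path('./sample').glob('*.jpeg'))[:3]:
    img = Image.open(img_path)
    print(img_path.name, '->', detector.predict(img))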
Download sample dataset | ! gdown https://drive.google.com/uc?id=19QU4VnKtgm3gf0Uw_N2QKSquW1SQ5JiE
! unzip -qq -o ./data_line.zip | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Train model: 1. Load your config. 2. Train the model using your dataset above. Load the default config; we adopt VGG for image feature extraction. | from vietocr.tool.config import Cfg
from vietocr.model.trainer import Trainer | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Change the config: * *data_root*: the folder containing all your images * *train_annotation*: path to the train annotation * *valid_annotation*: path to the valid annotation * *print_every*: show train loss every n steps * *valid_every*: show validation loss every n steps * *iters*: number of iterations to train your model * *export*: export weights to a folder that you can use for inference * *metrics*: number of samples in the validation annotation used for computing full_sequence_accuracy; for a large dataset this will take too long, so you can reduce this number | config = Cfg.load_config_from_name('vgg_transformer')
#config['vocab'] = 'aAàÀảẢãÃáÁạẠăĂằẰẳẲẵẴắẮặẶâÂầẦẩẨẫẪấẤậẬbBcCdDđĐeEèÈẻẺẽẼéÉẹẸêÊềỀểỂễỄếẾệỆfFgGhHiIìÌỉỈĩĨíÍịỊjJkKlLmMnNoOòÒỏỎõÕóÓọỌôÔồỒổỔỗỖốỐộỘơƠờỜởỞỡỠớỚợỢpPqQrRsStTuUùÙủỦũŨúÚụỤưƯừỪửỬữỮứỨựỰvVwWxXyYỳỲỷỶỹỸýÝỵỴzZ0123456789!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~ '
dataset_params = {
'name':'hw',
'data_root':'./data_line/',
'train_annotation':'train_line_annotation.txt',
'valid_annotation':'test_line_annotation.txt'
}
params = {
'print_every':200,
'valid_every':15*200,
'iters':20000,
'checkpoint':'./checkpoint/transformerocr_checkpoint.pth',
'export':'./weights/transformerocr.pth',
'metrics': 10000
}
config['trainer'].update(params)
config['dataset'].update(dataset_params)
config['device'] = 'cuda:0' | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
You can change any of these params in the full list below. | config
You should train the model from our pretrained weights. | trainer = Trainer(config, pretrained=True)
| Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Save the model configuration for inference; it can be loaded back with load_config_from_file. | trainer.config.save('config.yml')
Visualize your dataset to check that the data augmentation is appropriate. | trainer.visualize_dataset()
Train now | trainer.train() | iter: 000200 - train loss: 1.657 - lr: 1.91e-05 - load time: 1.08 - gpu time: 158.33
iter: 000400 - train loss: 1.429 - lr: 3.95e-05 - load time: 0.76 - gpu time: 158.76
iter: 000600 - train loss: 1.331 - lr: 7.14e-05 - load time: 0.73 - gpu time: 158.38
iter: 000800 - train loss: 1.252 - lr: 1.12e-04 - load time: 1.29 - gpu time: 158.43
iter: 001000 - train loss: 1.218 - lr: 1.56e-04 - load time: 0.84 - gpu time: 158.86
iter: 001200 - train loss: 1.192 - lr: 2.01e-04 - load time: 0.78 - gpu time: 160.20
iter: 001400 - train loss: 1.140 - lr: 2.41e-04 - load time: 1.54 - gpu time: 158.48
iter: 001600 - train loss: 1.129 - lr: 2.73e-04 - load time: 0.70 - gpu time: 159.42
iter: 001800 - train loss: 1.095 - lr: 2.93e-04 - load time: 0.74 - gpu time: 158.03
iter: 002000 - train loss: 1.098 - lr: 3.00e-04 - load time: 0.66 - gpu time: 159.21
iter: 002200 - train loss: 1.060 - lr: 3.00e-04 - load time: 1.52 - gpu time: 157.63
iter: 002400 - train loss: 1.055 - lr: 3.00e-04 - load time: 0.80 - gpu time: 159.34
iter: 002600 - train loss: 1.032 - lr: 2.99e-04 - load time: 0.74 - gpu time: 159.13
iter: 002800 - train loss: 1.019 - lr: 2.99e-04 - load time: 1.42 - gpu time: 158.27
| Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Visualize prediction from our trained model | trainer.visualize_prediction() | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Compute full seq accuracy for full valid dataset | trainer.precision() | _____no_output_____ | Apache-2.0 | vietocr_gettingstart.ipynb | uMetalooper/vietocr |
Heroes Of Pymoli Data Analysis * Of the 1163 active players, the vast majority are male (82%). There also exists a smaller, but notable, proportion of female players (16%). * Our peak age demographic falls between 20-24 (42%), with secondary groups falling between 15-19 (17.80%) and 25-29 (15.48%). * Our players are putting in significant cash during the lifetime of their gameplay. Across all major age and gender demographics, the average purchase for a user is roughly $491. ----- | import pandas as pd
import numpy as np | _____no_output_____ | ADSL | HeroesOfPymoli/.ipynb_checkpoints/HeroesOfPymoli_Example-checkpoint.ipynb | dimpalsuthar91/RePanda |
Metadata preprocessing tutorial. Melusine's **prepare_data.metadata_engineering subpackage** provides classes to preprocess the metadata: - **MetaExtension**: a transformer which creates an 'extension' feature extracted from regex in metadata. It extracts the extensions of mail addresses. - **MetaDate**: a transformer which creates new features from dates such as hour, minute, and dayofweek. - **Dummifier**: a transformer that dummifies categorical features. All the classes have **fit_transform** methods. Input dataframe: - To use a **MetaExtension** transformer, the dataframe requires a **from** column. - To use a **MetaDate** transformer, the dataframe requires a **date** column. | from melusine.data.data_loader import load_email_data
df_emails = load_email_data()
df_emails = df_emails[['from','date']]
df_emails['from']
df_emails['date'] | _____no_output_____ | Apache-2.0 | tutorial/tutorial05_metadata_preprocessing.ipynb | milidris/melusine |
MetaExtension transformer. A **MetaExtension transformer** creates an *extension* feature extracted from regex in metadata. It extracts the extensions of mail addresses. | from melusine.prepare_email.metadata_engineering import MetaExtension
meta_extension = MetaExtension()
df_emails = meta_extension.fit_transform(df_emails)
df_emails.extension | _____no_output_____ | Apache-2.0 | tutorial/tutorial05_metadata_preprocessing.ipynb | milidris/melusine |
MetaDate transformer. A **MetaDate transformer** creates new features from dates such as: hour, minute, dayofweek. | from melusine.prepare_email.metadata_engineering import MetaDate
meta_date = MetaDate()
df_emails = meta_date.fit_transform(df_emails)
df_emails.date[0]
df_emails.hour[0]
df_emails.loc[0,'min']
df_emails.dayofweek[0] | _____no_output_____ | Apache-2.0 | tutorial/tutorial05_metadata_preprocessing.ipynb | milidris/melusine |
Dummifier transformer. A **Dummifier transformer** dummifies categorical features. Its arguments are: - **columns_to_dummify**: a list of the metadata columns to dummify. | from melusine.prepare_email.metadata_engineering import Dummifier
dummifier = Dummifier(columns_to_dummify=['extension', 'dayofweek', 'hour', 'min'])
df_meta = dummifier.fit_transform(df_emails)
df_meta.columns
df_meta.head() | _____no_output_____ | Apache-2.0 | tutorial/tutorial05_metadata_preprocessing.ipynb | milidris/melusine |
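As a closing sketch (not part of the original tutorial), the three transformers can be chained into a single helper; this simply repeats the fit_transform calls shown above in order, with the same column names.
from melusine.data.data_loader import load_email_data
from melusine.prepare_email.metadata_engineering import MetaExtension, MetaDate, Dummifier

def preprocess_metadata(df):
    # extension feature from the 'from' column, date features from the 'date' column,
    # then dummify the resulting categorical columns
    df = MetaExtension().fit_transform(df)
    df = MetaDate().fit_transform(df)
    dummifier = Dummifier(columns_to_dummify=['extension', 'dayofweek', 'hour', 'min'])
    return dummifier.fit_transform(df)

df_meta = preprocess_metadata(load_email_data()[['from', 'date']])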
Table of Contents: 1 Seq2Seq With Attention; 1.1 Data Preparation; 1.2 Model Implementation; 1.2.1 Encoder; 1.2.2 Attention; 1.2.3 Decoder; 1.2.4 Seq2Seq; 1.3 Training Seq2Seq; 1.4 Evaluating Seq2Seq; 1.5 Summary; 2 Reference | # code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import math
import time
import spacy
import random
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
%watermark -a 'Ethen' -d -t -v -p numpy,torch,torchtext,spacy | Ethen 2019-10-09 13:46:01
CPython 3.6.4
IPython 7.7.0
numpy 1.16.5
torch 1.1.0.post2
torchtext 0.3.1
spacy 2.1.6
| MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Seq2Seq With Attention Seq2Seq framework involves a family of encoders and decoders, where the encoder encodes a source sequence into a fixed length vector from which the decoder picks up and aims to correctly generates the target sequence. The vanilla version of this type of architecture looks something along the lines of:The RNN encoder has an input sequence $x_1, x_2, x_3, x_4$. We denote the encoder states by $c_1, c_2, c_3$. The encoder outputs a single output vector $c$ which is passed as input to the decoder. Like the encoder, the decoder is also a single-layered RNN, we denote the decoder states by $s_1, s_2, s_3$ and the network's output by $y_1, y_2, y_3, y_4$. A problem with this vanilla architecture lies in the fact that the decoder needs to represent the entire input sequence $x_1, x_2, x_3, x_4$ as a single vector $c$, which can cause information loss. In other words, the fixed-length context vector is hypothesized to be the bottleneck in this framework.The attention mechanism that we'll be introducing here extends this approach by allowing the model to soft search for parts of the source sequence that are relevant to predicting the target sequence, which looks like the following:The attention mechanism is located between the encoder and the decoder, its input is composed of the encoder's output vectors $h_1, h_2, h_3, h_4$ and the states of the decoder $s_0, s_1, s_2, s_3$, the attention's output is a sequence of vectors called context vectors denoted by $c_1, c_2, c_3, c_4$. These context vectors enable the decoder to focus on certain parts of the input when predicting its output. Each context vector is a weighted sum of the encoder's output vectors $h_1, h_2, h_3, h_4$, where each vector $h_i$ contains information about the whole input sequence with a strong focus on the parts surrounding the i-th vector of the input sequence. The vectors $h_1, h_2, h_3, h_4$ are scaled by weights $\alpha_{ij}$ capturing the degree of relevance of input $x_j$ to output at time $i$, $y_i$. The context vectors $c_1, c_2, c_3, c_4$ are calculated by:\begin{align}c_i = \sum_{j=1}^4 a_{ij} h_j\end{align}The attention weights $a_{ij}$ are learned using an additional fully-connected network, denoted by $fc$, whose input consists of the decoder's hidden state $s_0, s_1, s_2, s_3$ and the encoder's output $h_1, h_2, h_3, h_4$. It's computation can be more formally defined by:\begin{align}a_{ij} = \frac{exp(e_{ij})}{\sum_{k=1}^4exp(e_{ik})}\end{align}Where:\begin{align}e_{ij} = fc(s_{i-1}, h_j)\end{align}As can be seen in the above image, the fully-connected network receives the concatenation of vectors $[s_{i-1}, h_i]$ as input at time step $i$. The network has a single fully-connected layer, the outputs of the layer, denoted by $e_{ij}$, are passed through a softmax function computing the attention weights, which lie in $[0,1]$.Note that we are using the same fully-connected network for all the concatenated pairs $[s_{i-1},h_1], [s_{i-1},h_2], [s_{i-1},h_3], [s_{i-1},h_4]$, meaning there is a single network learning the attention weights.To re-emphasize the attention weights $\alpha_{ij}$ reflects the importance of $h_j$ with respect to the previous hidden state $s_{i−1}$ in deciding the next state $s_i$ and generating $y_i$. A large $\alpha_{ij}$ attention weight causes the RNN to focus on input $x_j$ (represented by the encoder's output $h_j$), when predicting the output $y_i$. 
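To make the softmax step concrete, here is a made-up numeric example (not from the paper): if the scores for a four-token source sentence are $e_{i1}=1.0, e_{i2}=2.0, e_{i3}=0.5, e_{i4}=0.1$, then $\sum_{k}\exp(e_{ik}) \approx 12.86$ and the attention weights come out to roughly $a_{i1}\approx0.21$, $a_{i2}\approx0.57$, $a_{i3}\approx0.13$, $a_{i4}\approx0.09$; they sum to 1 and put most of the focus on the second source token.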
We can talk through an iteration of the algorithm to see how it all ties together.The first computation performed is the computation of vectors $h_1, h_2, h_3, h_4$ by the encoder. These are then used as inputs to the attention mechanism. This is where the decoder is first involved by inputting its initial state vector $s_0$ (note that for this initial state of the decoder, we often times use the hidden state from the encoder) and we have the first attention input sequence $[s_0, h_1], [s_0, h_2], [s_0, h_3], [s_0, h_4]$.The attention mechanism picks up the inputs and computes the first set of attention weights $\alpha_{11}, \alpha_{12}, \alpha_{13}, \alpha_{14}$ enabling the computation of the first context vector $c_1$. The decoder now uses $[s_0,c_1]$ to generate the first output $y_1$. This process then repeats itself, until we've generated all the outputs. Data Preparation This part is pretty much identical to that of the vanilla seq2seq, hence explanation is omitted. | # !python -m spacy download de
# !python -m spacy download en
SEED = 2222
random.seed(SEED)
torch.manual_seed(SEED)
# tokenize sentences into individual tokens
# https://spacy.io/usage/spacy-101#annotations-token
spacy_de = spacy.load('de_core_news_sm')
spacy_en = spacy.load('en_core_web_sm')
def tokenize_de(text):
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
return [tok.text for tok in spacy_en.tokenizer(text)]
source = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)
target = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)
train_data, valid_data, test_data = Multi30k.splits(exts=('.de', '.en'), fields=(source, target))
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
train_data.examples[0].src
train_data.examples[0].trg
source.build_vocab(train_data, min_freq=2)
target.build_vocab(train_data, min_freq=2)
print(f"Unique tokens in source (de) vocabulary: {len(source.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(target.vocab)}")
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 128
# create batches out of the dataset and sends them to the appropriate device
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data), batch_size=BATCH_SIZE, device=device)
test_batch = next(iter(test_iterator))
test_batch | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Model Implementation | # adjustable parameters
INPUT_DIM = len(source.vocab)
OUTPUT_DIM = len(target.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
ENC_HID_DIM = 512
DEC_HID_DIM = 512
N_LAYERS = 1
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5 | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
The following sections are heavily "borrowed" from the wonderful tutorial on this topic listed below.- [Jupyter Notebook: Neural Machine Translation by Jointly Learning to Align and Translate](https://nbviewer.jupyter.org/github/bentrevett/pytorch-seq2seq/blob/master/3%20-%20Neural%20Machine%20Translation%20by%20Jointly%20Learning%20to%20Align%20and%20Translate.ipynb)Some personal preference modifications have been made. Encoder Like other seq2seq-like architectures, we first need to specify an encoder. Here we'll be using a bidirectional GRU layer. With a bidirectional layer, we have a forward layer scanning the sentence from left to right (shown below in green), and a backward layer scanning the sentence from right to left (yellow). From the coding perspective, we need to set the `bidirectional=True` for the GRU layer's argument.More formally, we now have:$$\begin{align}h_t^\rightarrow &= \text{EncoderGRU}^\rightarrow(x_t^\rightarrow,h_{t-1}^\rightarrow)\\h_t^\leftarrow &= \text{EncoderGRU}^\leftarrow(x_t^\leftarrow,h_{t-1}^\leftarrow)\end{align}$$Where $x_0^\rightarrow = \text{}, x_1^\rightarrow = \text{guten}$ and $x_0^\leftarrow = \text{}, x_1^\leftarrow = \text{morgen}$.As before, we only pass an embedded input to our GRU layer. We'll get two context vectors, one from the forward layer after it has seen the final word in the sentence, $z^\rightarrow=h_T^\rightarrow$, and one from the backward layer after it has seen the first word in the sentence, $z^\leftarrow=h_T^\leftarrow$.As we'll be using bidirectional layer, the next section is devoted to help us understand how the output looks like before we implement the actual encoder that we'll be using. The shape of the output is explicitly printed out to make it easier to comprehend. Here, we're using GRU layer, which can be replaced with a LSTM layer, which is similar, but return an additional cell state variable that has the same size as the hidden state. | class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.emb_dim = emb_dim
self.hid_dim = hid_dim
self.input_dim = input_dim
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.GRU(emb_dim, hid_dim, n_layers, dropout=dropout,
bidirectional=True)
def forward(self, src_batch):
# src [sent len, batch size]
embedded = self.embedding(src_batch) # [sent len, batch size, emb dim]
outputs, hidden = self.rnn(embedded) # [sent len, batch size, hidden dim]
# outputs -> [sent len, batch size, hidden dim * n directions]
# hidden -> [n layers * n directions, batch size, hidden dim]
return outputs, hidden
# first experiment with n_layers = 1
n_layers = 1
encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, n_layers, ENC_DROPOUT).to(device)
outputs, hidden = encoder(test_batch.src)
outputs.shape, hidden.shape | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Notice that output's last dimension is 1024, which is the hidden dimension (512) multiplied by the number of directions (2). Whereas the hidden's first dimension is 2, representing the number of directions (2).- The returned outputs of bidirectional RNN at timestep $t$ is the output after feeding input to both the reverse and normal RNN unit at timestep $t$, where normal RNN has seen inputs $1...t$ and reverse RNN has seen inputs $t...n$, with $n$ being the length of the sequence).- The returned hidden state of bidirectional RNN is the hidden state after the whole sequence is consume. For normal RNN it's after timestep $n$; for reverse RNN it's after timestep 1.The following diagram can also come in handy when visualizing the difference between output and hidden.In the diagram $n$ notes each timestep, and $w$ denotes the number of layer.- output comprises all the hidden states in the last layer ("last" depth-wise, not time-wise).- ($h_n$, $c_n$) comprise of the hidden states after the last timestep, $t = n$, so we could potentially feed them into another LSTM layer. | # the outputs are concatenated at the last dimension
assert (outputs[-1, :, :ENC_HID_DIM] == hidden[0]).all()
assert (outputs[0, :, ENC_HID_DIM:] == hidden[1]).all()
# experiment with n_layers = 2
n_layers = 2
encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, n_layers, ENC_DROPOUT).to(device)
outputs, hidden = encoder(test_batch.src)
outputs.shape, hidden.shape | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Notice now the first dimension of the hidden cell becomes 4, which represents the number of layers (2) multiplied by the number of directions (2). The order of the hidden state is stacked by [forward_1, backward_1, forward_2, backward_2, ...] | assert (outputs[-1, :, :ENC_HID_DIM] == hidden[2]).all()
assert (outputs[0, :, ENC_HID_DIM:] == hidden[3]).all() | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
We'll need some final touches for our actual encoder. As our encoder's hidden state will be used as the decoder's initial hidden state, we need to make sure we make them the same shape. In our example, the decoder is not bidirectional, and only needs a single context vector, $z$, to use as its initial hidden state, $s_0$, and we currently have two, a forward and a backward one ($z^\rightarrow=h_T^\rightarrow$ and $z^\leftarrow=h_T^\leftarrow$, respectively). We solve this by concatenating the two context vectors together, passing them through a linear layer, $g$, and applying the $\tanh$ activation function. $$\begin{align}z=\tanh(g(h_T^\rightarrow, h_T^\leftarrow)) = \tanh(g(z^\rightarrow, z^\leftarrow)) = s_0\end{align}$$ | class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, enc_hid_dim, dec_hid_dim, n_layers, dropout):
super().__init__()
self.emb_dim = emb_dim
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.input_dim = input_dim
self.n_layers = n_layers
self.dropout = dropout
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.GRU(emb_dim, enc_hid_dim, n_layers, dropout=dropout,
bidirectional=True)
self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)
def forward(self, src_batch):
# src [sent len, batch size]
# [sent len, batch size, emb dim]
embedded = self.embedding(src_batch)
outputs, hidden = self.rnn(embedded)
# outputs -> [sent len, batch size, hidden dim * n directions]
# hidden -> [n layers * n directions, batch size, hidden dim]
# initial decoder hidden is final hidden state of the forwards and
# backwards encoder RNNs fed through a linear layer
concated = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
hidden = torch.tanh(self.fc(concated))
return outputs, hidden
encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, N_LAYERS, ENC_DROPOUT).to(device)
outputs, hidden = encoder(test_batch.src)
outputs.shape, hidden.shape | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Attention The next part is the hightlight. The attention layer will take in the previous hidden state of the decoder $s_{t-1}$, and all of the stacked forward and backward hidden state from the encoder $H$. The output will be an attention vector $a_t$, that is the length of the source sentece, each element of this vector will be a floating number between 0 and 1, and the entire vector sums up to 1.Intuitively, this layer takes in what we've decoded so far $s_{t-1}$, and all of what have encoded $H$, to produce a vector $a_t$, that represents which word in the source sentence should we pay the most attention to in order to correctly predict the next thing in the target sequence $y_{t+1}$.Graphically, this looks something like below. For the very first attention vector, where we use the encoder's hidden state as the initial hidden state from the decoder. The green/yellow blocks represent the hidden states from both the forward and backward RNNs, and the attention computation is all done within the pink block. | class Attention(nn.Module):
def __init__(self, enc_hid_dim, dec_hid_dim):
super().__init__()
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
# enc_hid_dim multiply by 2 due to bidirectional
self.fc1 = nn.Linear(enc_hid_dim * 2 + dec_hid_dim, dec_hid_dim)
self.fc2 = nn.Linear(dec_hid_dim, 1, bias=False)
def forward(self, encoder_outputs, hidden):
src_len = encoder_outputs.shape[0]
batch_size = encoder_outputs.shape[1]
# repeat encoder hidden state src_len times [batch size, sent len, dec hid dim]
hidden = hidden.unsqueeze(1).repeat(1, src_len, 1)
# reshape/permute the encoder output, so that the batch size comes first
# [batch size, sent len, enc hid dim * 2], times 2 because of bidirectional
outputs = encoder_outputs.permute(1, 0, 2)
# the attention mechanism receives a concatenation of the hidden state
# and the encoder output
concat = torch.cat((hidden, outputs), dim=2)
# fully connected layer and softmax layer to compute the attention weight
# [batch size, sent len, dec hid dim]
energy = torch.tanh(self.fc1(concat))
# attention weight should be of [batch size, sent len]
attention = self.fc2(energy).squeeze(dim=2)
attention_weight = torch.softmax(attention, dim=1)
return attention_weight
attention = Attention(ENC_HID_DIM, DEC_HID_DIM).to(device)
attention_weight = attention(outputs, hidden)
attention_weight.shape | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
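A quick sanity check (not in the original notebook) that the attention layer really outputs a probability distribution over the source tokens for every example in the batch:
# every row of attention_weight should be non-negative and sum to 1
row_sums = attention_weight.sum(dim=1)
assert torch.allclose(row_sums, torch.ones_like(row_sums))
assert (attention_weight >= 0).all()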
Decoder Now comes the decoder, within the decoder, we first use the attention layer that we've created in the previous section to compute the attention weight, this gives us the weight for each source sentence that the model should pay attention to when generating the current target output in the sequence. Along with the output from the encoder, this gives us the context vector. Finally, the decoder takes the embedded input along with the context to generate the target output in the sequence. | class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, enc_hid_dim, dec_hid_dim, n_layers,
dropout, attention):
super().__init__()
self.emb_dim = emb_dim
self.enc_hid_dim = enc_hid_dim
self.dec_hid_dim = dec_hid_dim
self.output_dim = output_dim
self.n_layers = n_layers
self.dropout = dropout
self.attention = attention
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.GRU(enc_hid_dim * 2 + emb_dim, dec_hid_dim, n_layers, dropout=dropout)
self.linear = nn.Linear(dec_hid_dim, output_dim)
def forward(self, trg, encoder_outputs, hidden):
# trg [batch size]
# outputs [src sen len, batch size, enc hid dim * 2], times 2 due to bidirectional
# hidden [batch size, dec hid dim]
# [batch size, 1, sent len]
attention = self.attention(encoder_outputs, hidden).unsqueeze(1)
# [batch size, sent len, enc hid dim * 2]
outputs = encoder_outputs.permute(1, 0, 2)
# [1, batch size, enc hid dim * 2]
context = torch.bmm(attention, outputs).permute(1, 0, 2)
# input sentence -> embedding
# [1, batch size, emb dim]
embedded = self.embedding(trg.unsqueeze(0))
rnn_input = torch.cat((embedded, context), dim=2)
outputs, hidden = self.rnn(rnn_input, hidden.unsqueeze(0))
prediction = self.linear(outputs.squeeze(0))
return prediction, hidden.squeeze(0)
decoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, N_LAYERS, DEC_DROPOUT, attention).to(device)
prediction, decoder_hidden = decoder(test_batch.trg[0], outputs, hidden)
# notice the decoder_hidden's shape should match the shape that's generated by
# the encoder
prediction.shape, decoder_hidden.shape | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Seq2Seq This part is about putting the encoder and decoder together and is very much identical to the vanilla seq2seq framework, hence the explanation is omitted. | class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
def forward(self, src_batch, trg_batch, teacher_forcing_ratio=0.5):
max_len, batch_size = trg_batch.shape
trg_vocab_size = self.decoder.output_dim
# tensor to store decoder's output
outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)
# encoder_outputs : all hidden states of the input sequence (forward and backward)
# hidden : final forward and backward hidden states, passed through a linear layer
encoder_outputs, hidden = self.encoder(src_batch)
trg = trg_batch[0]
for i in range(1, max_len):
prediction, hidden = self.decoder(trg, encoder_outputs, hidden)
outputs[i] = prediction
if random.random() < teacher_forcing_ratio:
trg = trg_batch[i]
else:
trg = prediction.argmax(1)
return outputs
attention = Attention(ENC_HID_DIM, DEC_HID_DIM)
encoder = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, N_LAYERS, ENC_DROPOUT)
decoder = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, N_LAYERS, DEC_DROPOUT, attention)
seq2seq = Seq2Seq(encoder, decoder, device).to(device)
seq2seq
outputs = seq2seq(test_batch.src, test_batch.trg)
outputs.shape
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(seq2seq):,} trainable parameters') | The model has 12,975,877 trainable parameters
| MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Training Seq2Seq We've done the hard work of defining our seq2seq module. The final touch is to specify the training/evaluation loop. | optimizer = optim.Adam(seq2seq.parameters())
# ignore the padding index when calculating the loss
PAD_IDX = target.vocab.stoi['<pad>']
criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)
def train(seq2seq, iterator, optimizer, criterion):
seq2seq.train()
epoch_loss = 0
for batch in iterator:
optimizer.zero_grad()
outputs = seq2seq(batch.src, batch.trg)
# the loss function only works on 2d inputs
# and 1d targets we need to flatten each of them
outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])
trg_flatten = batch.trg[1:].view(-1)
loss = criterion(outputs_flatten, trg_flatten)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(seq2seq, iterator, criterion):
seq2seq.eval()
epoch_loss = 0
with torch.no_grad():
for batch in iterator:
# turn off teacher forcing
outputs = seq2seq(batch.src, batch.trg, teacher_forcing_ratio=0)
# trg = [trg sent len, batch size]
# output = [trg sent len, batch size, output dim]
outputs_flatten = outputs[1:].view(-1, outputs.shape[-1])
trg_flatten = batch.trg[1:].view(-1)
loss = criterion(outputs_flatten, trg_flatten)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(seq2seq, train_iterator, optimizer, criterion)
valid_loss = evaluate(seq2seq, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(seq2seq.state_dict(), 'tut2-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}') | Epoch: 01 | Time: 2m 30s
Train Loss: 4.844 | Train PPL: 126.976
Val. Loss: 4.691 | Val. PPL: 108.948
Epoch: 02 | Time: 2m 30s
Train Loss: 3.948 | Train PPL: 51.808
Val. Loss: 4.004 | Val. PPL: 54.793
Epoch: 03 | Time: 2m 31s
Train Loss: 3.230 | Train PPL: 25.281
Val. Loss: 3.498 | Val. PPL: 33.059
Epoch: 04 | Time: 2m 29s
Train Loss: 2.733 | Train PPL: 15.379
Val. Loss: 3.413 | Val. PPL: 30.360
Epoch: 05 | Time: 2m 28s
Train Loss: 2.379 | Train PPL: 10.793
Val. Loss: 3.269 | Val. PPL: 26.285
Epoch: 06 | Time: 2m 32s
Train Loss: 2.089 | Train PPL: 8.079
Val. Loss: 3.228 | Val. PPL: 25.229
Epoch: 07 | Time: 2m 29s
Train Loss: 1.862 | Train PPL: 6.438
Val. Loss: 3.201 | Val. PPL: 24.561
Epoch: 08 | Time: 2m 30s
Train Loss: 1.626 | Train PPL: 5.084
Val. Loss: 3.297 | Val. PPL: 27.044
Epoch: 09 | Time: 2m 30s
Train Loss: 1.406 | Train PPL: 4.078
Val. Loss: 3.312 | Val. PPL: 27.451
Epoch: 10 | Time: 2m 31s
Train Loss: 1.239 | Train PPL: 3.453
Val. Loss: 3.467 | Val. PPL: 32.050
| MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Evaluating Seq2Seq | seq2seq.load_state_dict(torch.load('tut2-model.pt'))
test_loss = evaluate(seq2seq, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |') | | Test Loss: 3.237 | Test PPL: 25.467 |
| MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Here, we pick an example from our dataset and print out the original source and target sentence. Then we take a look at the "predicted" target sentence generated by the model. | example_idx = 0
example = train_data.examples[example_idx]
print('source sentence: ', ' '.join(example.src))
print('target sentence: ', ' '.join(example.trg))
src_tensor = source.process([example.src]).to(device)
trg_tensor = target.process([example.trg]).to(device)
print(trg_tensor.shape)
seq2seq.eval()
with torch.no_grad():
outputs = seq2seq(src_tensor, trg_tensor, teacher_forcing_ratio=0)
outputs.shape
output_idx = outputs[1:].squeeze(1).argmax(1)
' '.join([target.vocab.itos[idx] for idx in output_idx]) | _____no_output_____ | MIT | deep_learning/seq2seq/2_torch_seq2seq_attention.ipynb | certara-ShengnanHuang/machine-learning |
Categorical deduction (generic and all inferences). 1. Take a mix of generic and specific statements. 2. Create a powerset of combinations of specific statements. 3. Create an inference graph for each combination of specific statements. 4. Make all possible inferences for each graph (chain). 5. Present the union of possible conclusions for each node. | # Syllogism specific statements
# First statement A __ B.
# Second statement B __ C.
# Third statement A ___ C -> look up tables to check if true, possible, or false.
specific_statement_options = {'disjoint from','overlaps with','subset of','superset of','identical to'}
# make a dictionary. key is a tuple with first statement type, second statement type and third statement type and value is True, Possible, False
Truth_Table = dict()
Truth_Table[( 'subset of', 'subset of', 'subset of')] = 'True'
Truth_Table[( 'identical to', 'subset of', 'subset of')] = 'True'
Truth_Table[( 'overlaps with', 'subset of', 'subset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'subset of', 'subset of')] = 'Possible'
Truth_Table[( 'superset of', 'subset of', 'subset of')] = 'Possible'
Truth_Table[( 'subset of', 'identical to', 'subset of')] = 'True'
Truth_Table[( 'identical to', 'identical to', 'subset of')] = 'False'
Truth_Table[( 'overlaps with', 'identical to', 'subset of')] = 'False'
Truth_Table[( 'disjoint from', 'identical to', 'subset of')] = 'False'
Truth_Table[( 'superset of', 'identical to', 'subset of')] = 'False'
Truth_Table[( 'subset of', 'overlaps with', 'subset of')] = 'Possible'
Truth_Table[( 'identical to', 'overlaps with', 'subset of')] = 'False'
Truth_Table[( 'overlaps with', 'overlaps with', 'subset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'overlaps with', 'subset of')] = 'Possible'
Truth_Table[( 'superset of', 'overlaps with', 'subset of')] = 'False'
Truth_Table[( 'subset of', 'disjoint from', 'subset of')] = 'False'
Truth_Table[( 'identical to', 'disjoint from', 'subset of')] = 'False'
Truth_Table[( 'overlaps with', 'disjoint from', 'subset of')] = 'False'
Truth_Table[( 'disjoint from', 'disjoint from', 'subset of')] = 'Possible'
Truth_Table[( 'superset of', 'disjoint from', 'subset of')] = 'False'
Truth_Table[( 'subset of', 'superset of', 'subset of')] = 'Possible'
Truth_Table[( 'identical to', 'superset of', 'subset of')] = 'False'
Truth_Table[( 'overlaps with', 'superset of', 'subset of')] = 'False'
Truth_Table[( 'disjoint from', 'superset of', 'subset of')] = 'False'
Truth_Table[( 'superset of', 'superset of', 'subset of')] = 'False'
Truth_Table[( 'subset of', 'subset of', 'identical to')] = 'False'
Truth_Table[( 'identical to', 'subset of', 'identical to')] = 'False'
Truth_Table[( 'overlaps with', 'subset of', 'identical to')] = 'False'
Truth_Table[( 'disjoint from', 'subset of', 'identical to')] = 'False'
Truth_Table[( 'superset of', 'subset of', 'identical to')] = 'Possible'
Truth_Table[( 'subset of', 'identical to', 'identical to')] = 'False'
Truth_Table[( 'identical to', 'identical to', 'identical to')] = 'True'
Truth_Table[( 'overlaps with', 'identical to', 'identical to')] = 'False'
Truth_Table[( 'disjoint from', 'identical to', 'identical to')] = 'False'
Truth_Table[( 'superset of', 'identical to', 'identical to')] = 'False'
Truth_Table[( 'subset of', 'overlaps with', 'identical to')] = 'False'
Truth_Table[( 'identical to', 'overlaps with', 'identical to')] = 'False'
Truth_Table[( 'overlaps with', 'overlaps with', 'identical to')] = 'Possible'
Truth_Table[( 'disjoint from', 'overlaps with', 'identical to')] = 'False'
Truth_Table[( 'superset of', 'overlaps with', 'identical to')] = 'False'
Truth_Table[( 'subset of', 'disjoint from', 'identical to')] = 'False'
Truth_Table[( 'identical to', 'disjoint from', 'identical to')] = 'False'
Truth_Table[( 'overlaps with', 'disjoint from', 'identical to')] = 'False'
Truth_Table[( 'disjoint from', 'disjoint from', 'identical to')] = 'Possible'
Truth_Table[( 'superset of', 'disjoint from', 'identical to')] = 'False'
Truth_Table[( 'subset of', 'superset of', 'identical to')] = 'Possible'
Truth_Table[( 'identical to', 'superset of', 'identical to')] = 'False'
Truth_Table[( 'overlaps with', 'superset of', 'identical to')] = 'False'
Truth_Table[( 'disjoint from', 'superset of', 'identical to')] = 'False'
Truth_Table[( 'superset of', 'superset of', 'identical to')] = 'False'
Truth_Table[( 'subset of', 'subset of', 'overlaps with')] = 'False'
Truth_Table[( 'identical to', 'subset of', 'overlaps with')] = 'False'
Truth_Table[( 'overlaps with', 'subset of', 'overlaps with')] = 'Possible'
Truth_Table[( 'disjoint from', 'subset of', 'overlaps with')] = 'Possible'
Truth_Table[( 'superset of', 'subset of', 'overlaps with')] = 'Possible'
Truth_Table[( 'subset of', 'identical to', 'overlaps with')] = 'False'
Truth_Table[( 'identical to', 'identical to', 'overlaps with')] = 'False'
Truth_Table[( 'overlaps with', 'identical to', 'overlaps with')] = 'True'
Truth_Table[( 'disjoint from', 'identical to', 'overlaps with')] = 'False'
Truth_Table[( 'superset of', 'identical to', 'overlaps with')] = 'False'
Truth_Table[( 'subset of', 'overlaps with', 'overlaps with')] = 'Possible'
Truth_Table[( 'identical to', 'overlaps with', 'overlaps with')] = 'True'
Truth_Table[( 'overlaps with', 'overlaps with', 'overlaps with')] = 'Possible'
Truth_Table[( 'disjoint from', 'overlaps with', 'overlaps with')] = 'Possible'
Truth_Table[( 'superset of', 'overlaps with', 'overlaps with')] = 'Possible'
Truth_Table[( 'subset of', 'disjoint from', 'overlaps with')] = 'False'
Truth_Table[( 'identical to', 'disjoint from', 'overlaps with')] = 'False'
Truth_Table[( 'overlaps with', 'disjoint from', 'overlaps with')] = 'Possible'
Truth_Table[( 'disjoint from', 'disjoint from', 'overlaps with')] = 'Possible'
Truth_Table[( 'superset of', 'disjoint from', 'overlaps with')] = 'Possible'
Truth_Table[( 'subset of', 'superset of', 'overlaps with')] = 'Possible'
Truth_Table[( 'identical to', 'superset of', 'overlaps with')] = 'False'
Truth_Table[( 'overlaps with', 'superset of', 'overlaps with')] = 'Possible'
Truth_Table[( 'disjoint from', 'superset of', 'overlaps with')] = 'False'
Truth_Table[( 'superset of', 'superset of', 'overlaps with')] = 'False'
Truth_Table[( 'subset of', 'subset of', 'disjoint from')] = 'False'
Truth_Table[( 'identical to', 'subset of', 'disjoint from')] = 'False'
Truth_Table[( 'overlaps with', 'subset of', 'disjoint from')] = 'False'
Truth_Table[( 'disjoint from', 'subset of', 'disjoint from')] = 'Possible'
Truth_Table[( 'superset of', 'subset of', 'disjoint from')] = 'False'
Truth_Table[( 'subset of', 'identical to', 'disjoint from')] = 'False'
Truth_Table[( 'identical to', 'identical to', 'disjoint from')] = 'False'
Truth_Table[( 'overlaps with', 'identical to', 'disjoint from')] = 'False'
Truth_Table[( 'disjoint from', 'identical to', 'disjoint from')] = 'True'
Truth_Table[( 'superset of', 'identical to', 'disjoint from')] = 'False'
Truth_Table[( 'subset of', 'overlaps with', 'disjoint from')] = 'Possible'
Truth_Table[( 'identical to', 'overlaps with', 'disjoint from')] = 'False'
Truth_Table[( 'overlaps with', 'overlaps with', 'disjoint from')] = 'Possible'
Truth_Table[( 'disjoint from', 'overlaps with', 'disjoint from')] = 'Possible'
Truth_Table[( 'superset of', 'overlaps with', 'disjoint from')] = 'False'
Truth_Table[( 'subset of', 'disjoint from', 'disjoint from')] = 'True'
Truth_Table[( 'identical to', 'disjoint from', 'disjoint from')] = 'True'
Truth_Table[( 'overlaps with', 'disjoint from', 'disjoint from')] = 'Possible'
Truth_Table[( 'disjoint from', 'disjoint from', 'disjoint from')] = 'Possible'
Truth_Table[( 'superset of', 'disjoint from', 'disjoint from')] = 'Possible'
Truth_Table[( 'subset of', 'superset of', 'disjoint from')] = 'Possible'
Truth_Table[( 'identical to', 'superset of', 'disjoint from')] = 'False'
Truth_Table[( 'overlaps with', 'superset of', 'disjoint from')] = 'Possible'
Truth_Table[( 'disjoint from', 'superset of', 'disjoint from')] = 'True'
Truth_Table[( 'superset of', 'superset of', 'disjoint from')] = 'False'
Truth_Table[( 'subset of', 'subset of', 'superset of')] = 'False'
Truth_Table[( 'identical to', 'subset of', 'superset of')] = 'False'
Truth_Table[( 'overlaps with', 'subset of', 'superset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'subset of', 'superset of')] = 'False'
Truth_Table[( 'superset of', 'subset of', 'superset of')] = 'Possible'
Truth_Table[( 'subset of', 'identical to', 'superset of')] = 'False'
Truth_Table[( 'identical to', 'identical to', 'superset of')] = 'False'
Truth_Table[( 'overlaps with', 'identical to', 'superset of')] = 'False'
Truth_Table[( 'disjoint from', 'identical to', 'superset of')] = 'False'
Truth_Table[( 'superset of', 'identical to', 'superset of')] = 'True'
Truth_Table[( 'subset of', 'overlaps with', 'superset of')] = 'False'
Truth_Table[( 'identical to', 'overlaps with', 'superset of')] = 'False'
Truth_Table[( 'overlaps with', 'overlaps with', 'superset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'overlaps with', 'superset of')] = 'False'
Truth_Table[( 'superset of', 'overlaps with', 'superset of')] = 'Possible'
Truth_Table[( 'subset of', 'disjoint from', 'superset of')] = 'False'
Truth_Table[( 'identical to', 'disjoint from', 'superset of')] = 'False'
Truth_Table[( 'overlaps with', 'disjoint from', 'superset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'disjoint from', 'superset of')] = 'Possible'
Truth_Table[( 'superset of', 'disjoint from', 'superset of')] = 'Possible'
Truth_Table[( 'subset of', 'superset of', 'superset of')] = 'Possible'
Truth_Table[( 'identical to', 'superset of', 'superset of')] = 'True'
Truth_Table[( 'overlaps with', 'superset of', 'superset of')] = 'Possible'
Truth_Table[( 'disjoint from', 'superset of', 'superset of')] = 'False'
Truth_Table[( 'superset of', 'superset of', 'superset of')] = 'True'
major_premise = 'subset of'
minor_premise = 'subset of'
conclusion = 'subset of'
truth_value = Truth_Table[(major_premise,minor_premise,conclusion)]
print(truth_value)
def truth_value_additive(major_premise,minor_premise,conclusion):
return Truth_Table[(major_premise,minor_premise,conclusion)]
def all_true_specific(major_premise,minor_premise):
return [x for x in specific_statement_options if Truth_Table[(major_premise,minor_premise,x)]=='True']
def all_possible_specific(major_premise,minor_premise):
return [x for x in specific_statement_options if Truth_Table[(major_premise,minor_premise,x)]=='Possible']
def all_false_specific(major_premise,minor_premise):
return [x for x in specific_statement_options if Truth_Table[(major_premise,minor_premise,x)]=='False']
truth_value_additive('subset of','superset of','overlaps with')
all_true_specific('subset of','overlaps with')
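# the companion helpers defined above can be queried the same way
# (illustrative calls, not in the original notebook)
all_possible_specific('subset of','overlaps with')
all_false_specific('subset of','overlaps with')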
reverse_implications = dict()
reverse_implications['subset of']='superset of'
reverse_implications['identical to']='identical to'
reverse_implications['overlaps with']='overlaps with'
reverse_implications['disjoint from']='disjoint from'
reverse_implications['superset of']='subset of'
generic_statement_options = {'All','Some','No','Some_not'} # universal affirmative, particular affirmative, universal negative, particular negative
generic_to_specific = dict()
generic_to_specific['All'] = {'subset of','identical to'}
generic_to_specific['No'] = {'disjoint from'}
generic_to_specific['Some'] = {'overlaps with','subset of','identical to','superset of'} # generic_to_specific['All'].union({'superset of','overlaps with'})
generic_to_specific['Some_not'] = {'overlaps with','disjoint from','superset of'} # generic_to_specific['No'].union({'superset of','overlaps with'})
# generic premises and conclusion: tautology, fallacy, or possible if
# take in generic premises, make powersets of major and minor premise possibilities,
# get the truth value for each, and get the joint conclusion:
# always true (tautology), sometimes true or possible, and always false
import itertools
generic_major_premise = 'All'
generic_minor_premise = 'No'
generic_conclusion = 'No'
possibilities = list(itertools.product(generic_to_specific[generic_major_premise],generic_to_specific[generic_minor_premise],generic_to_specific[generic_conclusion]))
truth_value_list = []
for p in possibilities:
truth_value_list.append(truth_value_additive(p[0],p[1],p[2]))
print(possibilities,truth_value_list)
def generic_truth_value_additive(generic_major_premise,generic_minor_premise,generic_conclusion):
possibilities = list(itertools.product(generic_to_specific[generic_major_premise],generic_to_specific[generic_minor_premise],generic_to_specific[generic_conclusion]))
truth_value_list = []
for p in possibilities:
truth_value_list.append(truth_value_additive(p[0],p[1],p[2]))
print(possibilities,truth_value_list)
if ('True' in truth_value_list) and ('False' not in truth_value_list) and ('Possible' not in truth_value_list):
return 'True'
elif ('False' in truth_value_list) and ('True' in truth_value_list):
return 'Possible'
elif ('Possible' in truth_value_list):
return 'Possible'
elif ('False' in truth_value_list) and ('Possible' not in truth_value_list) and ('True' not in truth_value_list):
return 'False'
else:
return 'Not valid truth values'
generic_truth_value_additive('Some','No','No')
# reverse implications, additive only (A,B) - (B,C) - (A,C)
# define sets
sets = ['A','B','C']
first_statement = ['B','subset of','A']
second_statement = ['C','overlaps with','B']
third_statement = ['C','disjoint from','A']
additive_set_order_check = dict()
additive_set_order_check['first'] = (0,1)
additive_set_order_check['second'] = (1,2)
additive_set_order_check['third'] = (0,2)
# check if a statement needs to be reversed
def check_reverse_specific(statement,stype,sets):
if (statement[0]==sets[additive_set_order_check[stype][0]]) and (statement[2]==sets[additive_set_order_check[stype][1]]):
print('straight')
return statement
else:
print('reverse')
return [statement[2],reverse_implications[statement[1]],statement[0]]
# Ideally, should auto-calculate the order of sets, or alternatively, calculate the reverse of each statement as an inference.
# Given a set of statements in the form ['A', 'disjoint from','B'], make all inferences, find all contradictions.
import networkx as nx
del(infG)
statement_set = [['A','subset of','B'],['B','subset of','C'],['D','identical to','C']]
# make a graph?
infG = nx.DiGraph()
# get list of nodes from elt 0 and 2 from each statement
setnodes = set()
fromnodes = set()
tonodes = set()
for statement in statement_set:
fromnodes.add(statement[0])
tonodes.add(statement[2])
infG.add_edge(statement[0],statement[2],rel = statement[1])
setnodes = fromnodes.union(tonodes)
print(fromnodes,tonodes, setnodes)
roots = fromnodes-tonodes
ends = tonodes - fromnodes
print(roots,ends)
import matplotlib.pyplot as plt
#nx.draw_spectral(infG,with_labels=True,edge_labels = 'rel' font_size=18,node_size=1200)
pos = nx.spectral_layout(infG)
nx.draw(infG, pos, with_labels=True)
edge_labels = nx.get_edge_attributes(infG,'rel')
nx.draw_networkx_edge_labels(infG, pos, edge_labels=edge_labels)
#plt.savefig('this.png')
plt.show()
#nx.get_edge_attributes(infG,'rel')
# getting reverse implications and redrawing graph
infGr = nx.DiGraph()
for statement in statement_set:
infGr.add_edge(statement[2],statement[0],rel = reverse_implications[statement[1]])
pos = nx.spectral_layout(infGr)
nx.draw(infGr, pos, with_labels=True)
edge_labels = nx.get_edge_attributes(infGr,'rel')
nx.draw_networkx_edge_labels(infGr, pos, edge_labels=edge_labels)
#plt.savefig('this.png')
plt.show()
# note this rewrites the latest edges, and doesn't show multiple edges between nodes, which is annoying.
def make_all_inferences(infGc):
    roots = {n for n in infGc.nodes if list(infGc.predecessors(n))==[]}
    ends = {n for n in infGc.nodes if list(infGc.successors(n))==[]}
infG1 = infGc.copy()
prev_paths = ['']
no_more_inf_flag = 0
contradiction_found = 0
while no_more_inf_flag==0:
# calculating paths between roots and ends
paths = dict()
num_infs= 0
for r in roots:
paths[r] = dict()
for e in ends:
paths[r][e] = list(nx.all_simple_paths(infG1,r,e))
paths[r][e]= [p for p in paths[r][e] if p not in prev_paths]
prev_paths = prev_paths + paths[r][e]
for r in roots:
for e in ends:
for p in paths[r][e]:
print(p)
for i in range(len(p)-2):
print(p[i],p[i+1],p[i+2])
inf = all_true_specific(infG1.edges[(p[i],p[i+1])]['rel'],infG1.edges[(p[i+1],p[i+2])]['rel'])
if len(inf)>0:
# catch contradictions
if (p[i],p[i+2]) in infG1.edges():
if infG1.edges[(p[i],p[i+2])]['rel'] not in inf:
print('Contradicting relationship between ',p[i],' and ',p[i+2],' already exists as ',infG1.edges[(p[i],p[i+2])]['rel'])
contradiction_found = 1
else:
print('Since ',p[i],infG1.edges[(p[i],p[i+1])]['rel'],p[i+1],', and ',p[i+1],infG1.edges[(p[i+1],p[i+2])]['rel'],p[i+2],', this means')
print(p[i],inf[0],p[i+2])
infG1.add_edge(p[i],p[i+2],rel=inf[0])
fromnodes.add(p[i])
tonodes.add(p[i+2])
num_infs= num_infs+1
if (num_infs==0) or (contradiction_found==1):
no_more_inf_flag = 1
if contradiction_found==1:
print('not updating graph since contradiction found')
del(infG1)
else:
infGc = infG1.copy()
del(infG1)
edges = list(infGc.edges())
for edge in edges:
print(edge,infGc.edges[edge]['rel'])
return (infGc,contradiction_found)
(infG,cd_found) = make_all_inferences(infG)
if cd_found==1:
print('not changing graph until contradiction resolved')
else:
print('Updated graph')
del(infG2)
infG2 = nx.MultiDiGraph()
for (u,v) in infG.edges():
infG2.add_edge(u,v,0,rel=infG.edges[(u,v)]['rel'])
infG2.add_edge(v,u,1,rel=reverse_implications[infG.edges[(u,v)]['rel']])
list(nx.all_simple_paths(infG2,'D','A'))
def get_s_or_r_multi(infG3,u,v):
if infG3.edges.get((u,v,0),'')=='':
return 1
else:
return 0
def get_rel_multidigraph(infG3,u,v):
return infG3.edges[(u,v,get_s_or_r_multi(infG3,u,v))]['rel']
roots = {n for n in infG.nodes if list(infG.predecessors(n))==[]}
ends = {n for n in infG.nodes if list(infG.successors(n))==[]}
print(roots,ends)
prev_paths = ['']
paths = dict()
for r in roots:
paths[r] = dict()
for e in ends:
paths[r][e] = list(nx.all_simple_paths(infG2,r,e))
print(paths)
all_true_specific(get_rel_multidigraph(infG2,'D','C'),get_rel_multidigraph(infG2,'C','B'))
def make_all_inferences_multi(infGc):
# make multidigraph
infG2 = nx.MultiDiGraph()
    roots = list(infGc.nodes())
    ends = list(infGc.nodes())
if str(type(infGc))=="<class 'networkx.classes.multidigraph.MultiDiGraph'>":
for (u,v) in infGc.edges():
if get_s_or_r_multi(infGc,u,v)==0:
infG2.add_edge(u,v,0,rel=get_rel_multidigraph(infGc,u,v))
infG2.add_edge(v,u,1,rel=reverse_implications[get_rel_multidigraph(infGc,u,v)])
else:
infG2.add_edge(u,v,1,rel=get_rel_multidigraph(infGc,u,v))
infG2.add_edge(v,u,0,rel=reverse_implications[get_rel_multidigraph(infGc,u,v)])
elif str(type(infGc))=="<class 'networkx.classes.digraph.DiGraph'>":
#roots = {n for n in infG.nodes if list(infG.predecessors(n))==[]}
#ends = {n for n in infG.nodes if list(infG.successors(n))==[]}
for (u,v) in infGc.edges():
infG2.add_edge(u,v,0,rel=infGc.edges[(u,v)]['rel'])
infG2.add_edge(v,u,1,rel=reverse_implications[infGc.edges[(u,v)]['rel']])
else:
print('Only directed graphs or multidirected graphs accepted')
return ('','')
prev_paths = []
no_more_inf_flag = 0
contradiction_found = 0
while no_more_inf_flag==0:
# calculating paths between roots and ends
paths = dict()
num_infs= 0
for r in roots:
paths[r] = dict()
for e in ends:
paths[r][e] = list(nx.all_simple_paths(infG2,r,e))
paths[r][e]= [p for p in paths[r][e] if p not in prev_paths]
prev_paths = prev_paths + paths[r][e]
#print(prev_paths)
for r in roots:
for e in ends:
for p in paths[r][e]:
#print(p)
for i in range(len(p)-2):
#print(p[i],p[i+1],p[i+2])
#print(get_rel_multidigraph(infG2,p[i],p[i+1]))
#print(get_rel_multidigraph(infG2,p[i+1],p[i+2]))
inf = all_true_specific(get_rel_multidigraph(infG2,p[i],p[i+1]),get_rel_multidigraph(infG2,p[i+1],p[i+2]))
if len(inf)>0:
# catch contradictions
if (p[i],p[i+2]) in infG2.edges():
if get_rel_multidigraph(infG2,p[i],p[i+2]) not in inf:
print('Contradicting relationship between ',p[i],' and ',p[i+2],' already exists as ',get_rel_multidigraph(infG2,p[i],p[i+2]))
contradiction_found = 1
else:
print('Since ',p[i],get_rel_multidigraph(infG2,p[i],p[i+1]),p[i+1],', and ',p[i+1],get_rel_multidigraph(infG2,p[i+1],p[i+2]),p[i+2],', this means')
print(p[i],inf[0],p[i+2])
infG2.add_edge(p[i],p[i+2],0,rel=inf[0])
fromnodes.add(p[i])
tonodes.add(p[i+2])
num_infs= num_infs+1
if (num_infs==0) or (contradiction_found==1):
no_more_inf_flag = 1
if contradiction_found==1:
print('not updating graph since contradiction found')
del(infG2)
else:
infGc = infG2.copy()
del(infG2)
edges = list(infGc.edges())
for (u,v) in edges:
print('(',u,',',v,')',get_rel_multidigraph(infGc,u,v))
return (infGc,contradiction_found)
(infG,cd) = make_all_inferences_multi(infG)
roots = list(infG.nodes())
ends = list(infG.nodes())
for r in roots:
for e in ends:
print(list(nx.all_simple_paths(infG,r,e)))
print(generic_to_specific['All'].intersection(generic_to_specific['Some']))
print(generic_to_specific['Some'].intersection(generic_to_specific['Some_not']))
print(generic_to_specific['Some_not'].intersection(generic_to_specific['No']))
print(generic_to_specific['No'].intersection(generic_to_specific['All']))
print(generic_to_specific['All'].intersection(generic_to_specific['Some_not']))
print(generic_to_specific['Some'].intersection(generic_to_specific['No']))
def validate_statement(statement_set,new_statement):
# validating each new statement against existing statement set: assuming that the existing statement is already done with chain inferencing.
# statement set for each inference graph in possible ones should be considered, and if we can find the ones that satisfy. display the ones that don't and reduce possibilities.
# if a statement is encountered (specific or generic) that is completely new nodes, add (mark citation)
# if a statement is encountered (specific or generic) that uses one new node and one existing, add (mark citation)
# if a statement is encountered that uses the same two nodes in same order:
# if new statement and any old statement with same nodes are specific and different it is a contradiction and needs to be resolved.
# consider saving for each edge which statements it is inferred from so a chain can be established and displayed
# if new statement is specific and the combination of old ones is generic-specific combination, specific statement should be in generic_to_specific[dict] intersection of previous statements
# if new statement is generic and the old ones are a combination, the intersection of the new with the old should be displayed and verified. if intersection is nullset, throw up contradiction to resolve.
# if a statement with reverse nodes is encountered, reverse and follow above instructions.
    pass  # placeholder body so this stub is valid Python; see the sketch below
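# Illustrative sketch of the specific-vs-specific check described above (hypothetical
# helper; generic statements and citation tracking are not handled here):
def validate_specific_statement(statement_set, new_statement):
    for old in statement_set:
        if old[0] == new_statement[2] and old[2] == new_statement[0]:
            # reverse the stored statement so both refer to the same ordered pair of sets
            old = [old[2], reverse_implications[old[1]], old[0]]
        if old[0] == new_statement[0] and old[2] == new_statement[2]:
            if old[1] != new_statement[1]:
                return 'contradiction'
    return 'consistent'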
# Generic statements sets
import itertools
#def powerset_generic_to_specific(generic_statement_set):
generic_statement_set = [['A','No','B'],['B','All','C'],['C','Some','D']]
# add one more step here for multiple generic statements between two nodes - compatible (All,Some), (Some, Some_not), (Some_not, no). incompatible (no,all), (all, some_not), (some, no)
# the incompatible types should be filtered out during entry
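# Illustrative sketch of such a filter (hypothetical helper): two generic statements about
# the same ordered pair of sets are compatible iff their specific-relation sets overlap.
def generics_compatible(g1, g2):
    return len(generic_to_specific[g1] & generic_to_specific[g2]) > 0
# generics_compatible('All', 'Some') -> True; generics_compatible('All', 'No') -> False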
# set of converted generic to specific sets
possibilities_set = [list(generic_to_specific[statement[1]]) if statement[1] in generic_statement_options else [statement[1]] for statement in generic_statement_set]
node_set = [[statement[0],statement[2]] for statement in generic_statement_set]
print(possibilities_set,'\n\n\n')
def flattentup(tup):
flatlist = []
for elt in tup:
#print(elt,type(elt))
if type(elt) is not tuple:
#print('elt appended')
flatlist.append(elt)
else:
#print('calling recursive')
flatlist = flatlist + flattentup(elt)
return flatlist
#combinations = list(itertools.product([ps for ps in possibilities_set]))
#print(combinations)
combinations = possibilities_set[0]
for i in possibilities_set[1:len(possibilities_set)]:
combinations = list(itertools.product(combinations,i))
#print(combinations)
combinationslist = [flattentup(elt) for elt in combinations]
print(combinationslist)
infgraphdict =dict()
for i in range(len(combinationslist)):
infgraphdict[i] = []
for j in range(len(node_set)):
infgraphdict[i].append([node_set[j][0],combinationslist[i][j],node_set[j][1]])
print(infgraphdict)
import networkx as nx
import matplotlib.pyplot as plt
infdict = dict()
for i in infgraphdict.keys():
statement_set = infgraphdict[i]
# make a graph?
infG = nx.DiGraph()
# get list of nodes from elt 0 and 2 from each statement
setnodes = set()
fromnodes = set()
tonodes = set()
for statement in statement_set:
fromnodes.add(statement[0])
tonodes.add(statement[2])
infG.add_edge(statement[0],statement[2],rel = statement[1])
setnodes = fromnodes.union(tonodes)
print(fromnodes,tonodes, setnodes)
roots = fromnodes-tonodes
ends = tonodes - fromnodes
print(roots,ends)
pos = nx.spectral_layout(infG)
nx.draw(infG, pos, with_labels=True)
edge_labels = nx.get_edge_attributes(infG,'rel')
    nx.draw_networkx_edge_labels(infG, pos, edge_labels=edge_labels)
#plt.savefig('this.png')
plt.show()
(infdict[i],contradiction_found) = make_all_inferences_multi(infG)
del(infG)
# get possible relationships for each edge
edge_poss_dict = dict()
for i in infdict.keys():
#print(i)
for edge in infdict[i].edges():
#print(edge)
if edge not in edge_poss_dict.keys():
#print('adding new key')
edge_poss_dict[edge] = list()
#print(edge_poss_dict[edge])
#print(type(edge_poss_dict[edge]))
t = [edge[0],get_rel_multidigraph(infdict[i],edge[0],edge[1]),edge[1]]
if t not in edge_poss_dict[edge]:
edge_poss_dict[edge].append(t)
#print(edge_poss_dict[edge])
print(edge_poss_dict)
# consider converting possibles to smaller generic statement for readability.
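# Illustrative sketch of that conversion (hypothetical helper): a generic label describes an
# edge when every still-possible specific relation lies inside its specific-relation set.
def possibles_to_generics(possible_rels):
    rels = {p[1] for p in possible_rels}
    return [g for g in generic_statement_options if rels <= generic_to_specific[g]]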
infdict
del(edge_poss_dict) | _____no_output_____ | MIT | reasoning_engine/categorical reasoning/Categorical_deduction_generic_all_inferences.ipynb | rts1988/IntelligentTutoringSystem_Experiments |
Understanding ROS NodesThis tutorial introduces ROS graph concepts and discusses the use of `roscore`, `rosnode`, and `rosrun` commandline tools.Source: [ROS Wiki](http://wiki.ros.org/ROS/Tutorials/UnderstandingNodes) Quick Overview of Graph Concepts* Nodes: A node is an executable that uses ROS to communicate with other nodes.* Messages: ROS data type used when subscribing or publishing to a topic.* Topics: Nodes can publish messages to a topic as well as subscribe to a topic to receive messages.* Master: Name service for ROS (i.e. helps nodes find each other)* rosout: ROS equivalent of stdout/stderr* roscore: Master + rosout + parameter server (parameter server will be introduced later) roscore`roscore` is the first thing you should run when using ROS. | %%bash --bg
roscore | Starting job # 0 in a separate thread.
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
Using `rosnode``rosnode` displays information about the ROS nodes that are currently running. The `rosnode list` command lists these active nodes: | %%bash
rosnode list
%%bash
rosnode info rosout | --------------------------------------------------------------------------------
Node [/rosout]
Publications:
* /rosout_agg [rosgraph_msgs/Log]
Subscriptions:
* /rosout [unknown type]
Services:
* /rosout/get_loggers
* /rosout/set_logger_level
contacting node http://localhost:43395/ ...
Pid: 18703
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
Using `rosrun``rosrun` allows you to use the package name to directly run a node within a package (without having to know the package path). | %%bash --bg
rosrun turtlesim turtlesim_node | Starting job # 2 in a separate thread.
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
NOTE: The turtle may look different in your turtlesim window. Don't worry about it - there are [many types of turtle](http://wiki.ros.org/DistributionsCurrent_Distribution_Releases) and yours is a surprise! | %%bash
rosnode list | /rosout
/turtlesim
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
One powerful feature of ROS is that you can reassign Names from the command-line.Close the turtlesim window to stop the node. Now let's re-run it, but this time use a [Remapping Argument](http://wiki.ros.org/Remapping%20Arguments) to change the node's name: | %%bash --bg
rosrun turtlesim turtlesim_node __name:=my_turtle | Starting job # 3 in a separate thread.
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
Now, if we go back and use `rosnode list`: | %%bash
rosnode list | /my_turtle
/rosout
/turtlesim
| MIT | notebooks/ROS_Tutorials/.ipynb_checkpoints/ROS Nodes-checkpoint.ipynb | GimpelZhang/git_test |
print('Welcome to Techno Quiz: ')
ans = input('''Ready to begin (yes/no): ''')
score=0
total_Q=15
if ans.lower() =='yes' :
ans = input(''' 1.How to check your current python version ?
A. python version
B. python -V
Ans:''')
if ans.lower () == 'b' :
score+= 1
print('correct')
else:
print('Incorrect')
ans = input( '''2.What is used to define a block of code in python ?
A.Parenthesis
B.Curly braces
C.Indentation
Ans:''')
if ans.lower () == 'c':
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : C
Explanation: Python uses indentation to define block of code.
Indentations are simply Blank spaces or Tabs which is used as an indicator that indented code is the child part.
As curly braces are used in C/C++/Java, not in Python. So, Option C is correct.''')
ans = input( '''3.All keyword in python are in
A. Lowercase
B. Uppercase
C. Both uppercase & Lowercase
Ans:''')
if ans.lower () == 'c' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : C
Explanation: All keywords in python are in lowercase except True, False and None.
So, Option C is correct.''')
ans = input( '''4. Dictionary keys must be immutable
{true/false}
Ans:''')
if ans.lower () == 'true' :
score+= 1
print('correct')
else:
print('Incorrect')
print(''' Explanation: Dictionary keys must be immutable.which means you can use strings,numbers or tuples
as dictionary keys and you can't use any mutable object as the key such as list.''')
ans = input('''5.Which of the following function convert a string to a float in python?
A. int(x [,base])
B. float(x)
C. str(x)
Ans:''')
if ans.lower () == 'b':
score+= 1
print('correct')
else:
print('Incorrect')
print('Explanation: float(x) − Converts x to a floating-point number')
ans = input( '''6. In Python, how are arguments passed?
A. pass by value
B. pass by reference
C. It gives options to user to choose
Ans:''')
if ans.lower () =='b' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : B
Explanation: All parameters (arguments) in the Python language are passed by reference.
It means if you change what a parameter refers to within a function, the change also reflects back in the calling function''')
ans = input('''7.Which function can be used on the file to display a dialog for saving a file ?
A. Filename = savefilename()
B. Filename = asksaveasfilename()
C. No such option in python
Ans:''')
if ans.lower () == 'b' :
score+= 1
print('correct')
else:
print('Incorrect')
ans = input( ''' 8.What command is used to shuffle a list 'L' ?
A. L.shuffle()
B. shuffle(L)
C. random.Shuffle(L)
Ans:''')
if ans.lower () =='c' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : C
Explanation: To shuffle the list we use random.shuffle(List-name)function''')
ans = input( '''9.What is it called when a function is defined inside a class?
A. method
B. class
C. module
Ans:''')
if ans.lower () == 'a' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : A
Explanation: A function defined inside a class is called a method. So, option A is correct.''')
ans = input( '''10.Syntax error in python is detected by ______ at _____
A. compiler/compile time
B. interpreter/run time
C. compiler/run time
Ans:''')
if ans.lower () == 'b':
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : B
Explanation: Syntax error in python is detected by interpreter at run time.''')
ans = input( '''11.Which among the following are mutable objects in Python
(i) List
(ii) Integer
(iii) String
(iv) Tuple
A. i only
B. i and ii only
C. iii and iv only
D. iv only
Ans:''')
if ans.lower () == 'a' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : A
Explanation: List are mutable objects in Python.''')
ans = input( '''12. In python what is method inside class ?
A.attribute
B.object
C.function
Ans:''')
if ans.lower () =='c' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : C
Explanation: In OOP of Python, function is known by "method".''')
ans = input( '''13.The elements of a list are arranged in descending order. Which of the following two will give same outputs?
i. print(list_name.sort())
ii. print(max(list_name))
iii. print(list_name.reverse())
A. i, ii
B. i, iii
C. ii, iii
Ans:''')
if ans.lower () == 'b' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : B
Explanation: print(list_name.sort()) and print(list_name.reverse()) will give same outputs''')
ans = input( '''14.Which of the following is correct?
{class A:
def __init__(self,name):
self.name=name
a1=A("john")
a2=A("john") }
A. id(a1) and id(a2) will have same value.
B. id(a1) and id(a2) will have different values.
C. Two objects with same value of attribute cannot be created
Ans:''')
if ans.lower () =='b' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : B
Explanation: Although both a1 and a2 have same value of attributes,
but these two point to two different object.
Hence, their id will be different.''')
ans = input( '''15.Python was developed by
A. Guido van Rossum
B. James Gosling
C. Dennis Ritchie
Ans:''')
if ans.lower () == 'a' :
score+= 1
print('correct')
else:
print('Incorrect')
print('''Ans : A
Explanation: A Dutch Programmer Guido van Rossum developed python
at Centrum Wiskunde & Informatica (CWI) in the Netherlands as a successor to the ABC language.
''')
print('Result :', score, "questions correct.")
marks = (score/total_Q) * 100
print ("Marks Accqired [%] : ",marks)
print('''Thankyou for taking part in Techno Quiz
Have a Nice Day!!! ''')
| _____no_output_____ | Apache-2.0 | Quiz.ipynb | sandeepkumarpradhan71/sandeepkumar |
|
Input x, truth y; predict (y - x) in bins. Major changes: - in DataGenerator(), add y = y - X[output_idxs]. - in create_predictions(): when unnormalizing, only multiply by std, don't add the mean. - included adaptive bins. Observations: - DOI takes much longer to train to the same loss than normal categorical. - not much better performance with adaptive bins. ToDo: - change the create_prediction() function for ensemble (not done) and binned (done) prediction; currently x is added after predictions are made. - find a better method to get adaptive bins; currently bins are made on 1 year of data. - unable to run compute_bin_crps() for the full data (kernel dies); may need to load in chunks. | %load_ext autoreload
%autoreload 2
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
from src.data_generator import *
from src.train import *
from src.utils import *
from src.networks import *
tf.__version__
import os
import tensorflow as tf
os.environ["CUDA_VISIBLE_DEVICES"]=str(0)
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
args = load_args('../nn_configs/B/81.1-resnet_d3_dr_0.1.yml')
args['train_years']=['2017', '2017']
args['valid_years']=['2018-01-01','2018-03-31']
args['test_years']=['2018-04-01','2018-12-31']
args['model_save_dir'] ='/home/garg/data/WeatherBench/predictions/saved_models'
args['datadir']='/home/garg/data/WeatherBench/5.625deg'
args['is_categorical']=True
args['is_doi']=True
args['bin_min']=-2; args['bin_max']=2 #checked min, max of (x-y) in train.
args['adaptive_bins']=None
args['num_bins'], args['bin_min'], args['bin_max']
args['filters'] = [128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 2*args['num_bins']]
#args['loss'] = 'lat_categorical_loss'
dg_train, dg_valid, dg_test = load_data(**args)
x,y=dg_train[0]; print(x.shape, y.shape)
x,y=dg_valid[0]; print(x.shape, y.shape)
x,y=dg_test[0]; print(x.shape, y.shape)
#changing valid shape too. maybe not a good idea/not needed.
y.min(), y.max(), y[0,0,0,0,:] | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
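For reference, a minimal standalone sketch of the difference-of-input target described above (illustrative only, not the actual DataGenerator code; it assumes the normalized arrays x and y_raw, the output_idxs attribute, and the bin settings used in this notebook): | bin_edges = np.linspace(args['bin_min'], args['bin_max'], args['num_bins'] + 1)
def binned_difference_target(x, y_raw, output_idxs):
    # predict the change (y - x) instead of the absolute state, then one-hot bin it
    diff = y_raw - x[..., output_idxs]
    idx = np.clip(np.digitize(diff, bin_edges) - 1, 0, args['num_bins'] - 1)
    return np.eye(args['num_bins'])[idx]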
Training | # model = build_resnet_categorical(
# **args, input_shape=dg_train.shape,
# )
# # model.summary()
# categorical_loss = create_lat_categorical_loss(dg_train.data.lat, 2)
# model.compile(keras.optimizers.Adam(1e-3), loss=categorical_loss)
# model_history=model.fit(dg_train, epochs=50)
#training is slower compared to normal categorical without DOI
# #exp_id=args['exp_id']
# exp_id='categorical_doi_v1'
# model_save_dir=args['model_save_dir']
# model.save(f'{model_save_dir}/{exp_id}.h5')
# model.save_weights(f'{model_save_dir}/{exp_id}_weights.h5')
# #to_pickle(model_history.history, f'{model_save_dir}/{exp_id}_history.pkl')
# checking training
# # list all data in history
# print(history.history.keys())
# # summarize history for accuracy
# plt.plot(history.history['accuracy'])
# plt.plot(history.history['val_accuracy'])
# plt.title('model accuracy')
# plt.ylabel('accuracy')
# plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
# plt.show()
# # summarize history for loss
# plt.plot(history.history['loss'])
# plt.plot(history.history['val_loss'])
# plt.title('model loss')
# plt.ylabel('loss')
# plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
# plt.show() | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Predictions | exp_id='categorical_doi_v1'
model_save_dir=args['model_save_dir']
#args['ext_mean'] = xr.open_dataarray(f'{args["model_save_dir"]}/{args["exp_id"]}_mean.nc')
#args['ext_std'] = xr.open_dataarray(f'{args["model_save_dir"]}/{args["exp_id"]}_std.nc')
#dg_test = load_data(**args, only_test=True)
model = keras.models.load_model(
f'{model_save_dir}/{exp_id}.h5',
custom_objects={'PeriodicConv2D': PeriodicConv2D, 'categorical_loss': keras.losses.mse}
)
# #small test
# xtrue ,ytrue=dg_valid[0]
# ypred=model.predict(xtrue)
# print(ytrue.shape, ytrue.max(),ytrue.min(), ytrue.mean())
# print(ypred.shape, ypred.max(),ypred.min(), ypred.mean())
# print(xtrue[...,12].min(), xtrue[...,12].max(), xtrue[...,12].mean(), xtrue.shape)
#full-data (apr-dec 2018)
preds = create_predictions(model, dg_test, is_categorical=True,
is_doi=True, adaptive_bins=None,
bin_min=args['bin_min'], bin_max=args['bin_max'])
preds
#maybe add bin_min, bin_max, is_doi, to **kwargs in function.
#extremely small values may increase numerical error? (x-y)
preds.t.min(), preds.t.max(), preds.t.mean()
preds.t.bin_edges
#attempt 1: add actual x values to prediction
#attempt 2: add unnormalized x (using mean, std of dg_test) -no need.
#attempt 1
datadir=args['datadir']
z500_valid = load_test_data(f'{datadir}/geopotential_500', 'z').drop('level')
t850_valid = load_test_data(f'{datadir}/temperature_850', 't').drop('level')
valid = xr.merge([z500_valid, t850_valid])
valid
dg_test.lead_time
# valid_x=valid.sel(time=preds.time-np.timedelta64(3,'D'))
# valid_y=valid.sel(time=preds.time)
#Shifting left by 72 hours. Now this value can be added to preds.
valid_x_new=valid.shift(time=-dg_test.lead_time)
valid_x_new
print(
valid.t.isel(lat=0,lon=0).sel(time='2017-01-01T00:00:00').values,
valid.t.isel(lat=0,lon=0).sel(time='2017-01-04T00:00:00').values,
valid_x_new.t.isel(lat=0,lon=0).sel(time='2017-01-01T00:00:00').values)
#valid.t.isel(lat=0,lon=0,time=72).values,valid_x_new.t.isel(lat=0,lon=0,time=0).values | 257.84134 258.57373 258.57373
| MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Most likely class | # Using bin_mid_points of prediction with highest probability
das = []
for var in ['z', 't']:
idxs = np.argmax(preds[var], -1)
most_likely = preds[var].mid_points[idxs]
das.append(xr.DataArray(
most_likely, dims=['time', 'lat', 'lon'],
coords = [preds.time, preds.lat, preds.lon],
name=var
))
preds_ml = xr.merge(das)
preds_ml
preds_ml_new=preds_ml+valid.shift(time=-dg_test.lead_time)
#be careful of last points (2018-12-28) to 2018-12-31.
#they must contain nan values
preds_ml_new.t.isel(time=-36).values
#preds_new.t.sel(time='2018-12-28T22:00:00').values
#removing last 3 days (naive approach)
preds_ml_new=preds_ml_new.sel(time=slice(None,'2018-12-28T22:00:00'))
preds_ml_new.t.max().values, preds_ml_new.t.min().values, preds_ml_new.t.mean().values
valid.t.max().values, valid.t.min().values, valid.t.mean().values | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
RMSE | compute_weighted_rmse(preds_ml_new, valid).load()
#still very bad. for comparison, training on the same data for same epochs (loss=1.7) without difference to input method had rmse of 685 | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Binned CRPS | preds['t'].mid_points
#changing Observation directly instead of predictions for binned crps
obs=valid-valid.shift(time=-dg_test.lead_time)
obs=obs.sel(time=preds.time) #reducing to preds size
obs=obs.sel(time=slice(None,'2018-12-28T22:00:00'))#removing nan values
print(
valid.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values,
valid_x_new.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values,
obs.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values
)
obs #reduced set. 2018-01-04 to 2018-12-28
def compute_bin_crps(obs, preds, bin_edges):
"""
Last axis must be bin axis
obs: [...]
preds: [..., n_bins]
"""
obs = obs.values
preds = preds.values
# Convert observation
a = np.minimum(bin_edges[1:], obs[..., None])
b = bin_edges[:-1] * (bin_edges[0:-1] > obs[..., None])
y = np.maximum(a, b)
# Convert predictions to cumulative predictions with a zero at the beginning
cum_preds = np.cumsum(preds, -1)
cum_preds_zero = np.concatenate([np.zeros((*cum_preds.shape[:-1], 1)), cum_preds], -1)
xmin = bin_edges[..., :-1]
xmax = bin_edges[..., 1:]
lmass = cum_preds_zero[..., :-1]
umass = 1 - cum_preds_zero[..., 1:]
# y = np.atleast_1d(y)
# xmin, xmax = np.atleast_1d(xmin), np.atleast_1d(xmax)
# lmass, lmass = np.atleast_1d(lmass), np.atleast_1d(lmass)
scale = xmax - xmin
# print('scale =', scale)
y_scale = (y - xmin) / scale
# print('y_scale = ', y_scale)
z = y_scale.copy()
z[z < 0] = 0
z[z > 1] = 1
# print('z =', z)
a = 1 - (lmass + umass)
# print('a =', a)
crps = (
np.abs(y_scale - z) + z**2 * a - z * (1 - 2*lmass) +
a**2 / 3 + (1 - lmass) * umass
)
return np.sum(scale * crps, -1)
def compute_weighted_bin_crps(da_fc, da_true, mean_dims=xr.ALL_DIMS):
"""
"""
t = np.intersect1d(da_fc.time, da_true.time)
da_fc, da_true = da_fc.sel(time=t), da_true.sel(time=t)
weights_lat = np.cos(np.deg2rad(da_true.lat))
weights_lat /= weights_lat.mean()
dims = ['time', 'lat', 'lon']
if type(da_true) is xr.Dataset:
das = []
for var in da_true:
result = compute_bin_crps(da_true[var], da_fc[var], da_fc[var].bin_edges)
das.append(xr.DataArray(
result, dims=dims, coords=dict(da_true.coords), name=var
))
crps = xr.merge(das)
else:
result = compute_bin_crps(da_true, da_fc, da_fc.bin_edges)
crps = xr.DataArray(
result, dims=dims, coords=dict(da_true.coords), name=da_fc.name
)
crps = (crps * weights_lat).mean(mean_dims)
return crps
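# The ToDo at the top notes the kernel dies on the full period; an illustrative workaround
# (sketch; hypothetical helper) is to score a limited number of time steps at a time and
# combine the chunk scores with a length-weighted average.
def compute_weighted_bin_crps_chunked(da_fc, da_true, chunk_size=200):
    times = np.intersect1d(da_fc.time, da_true.time)
    total, n_done = None, 0
    for start in range(0, len(times), chunk_size):
        t = times[start:start + chunk_size]
        chunk = compute_weighted_bin_crps(da_fc.sel(time=t), da_true.sel(time=t)) * len(t)
        total = chunk if total is None else total + chunk
        n_done += len(t)
    return total / n_done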
obs1 = obs.sel(time='2018-05-05')
preds1 = preds.sel(time='2018-05-05')
compute_weighted_bin_crps(preds1, obs1).load()
#pretty bad again. | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
compare to - Adaptive binning Adaptive binning | #Finding bin edges on full 1 year training data (Not possible for 40 years)
args['is_categorical']=False
dg_train, dg_valid, dg_test = load_data(**args)
args['is_categorical']=True
x,y=dg_train[0]; print(x.shape, y.shape)
diff=y-x[...,dg_train.output_idxs]
print(diff.min(), diff.max(), diff.mean())
plt.hist(diff.reshape(-1))
diff=[]
for x,y in dg_train:
diff.append(y-x[...,dg_train.output_idxs])
diff = np.array([ elem for singleList in diff for elem in singleList])
diff.shape
diff_shape=diff.shape
diff2, bins=pd.qcut(diff.reshape(-1), args['num_bins'],
labels=False, retbins=True)
bins
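# pd.qcut above needs every (y - x) value in memory at once; an illustrative alternative for
# the full 40-year dataset (sketch; names are hypothetical) is to keep a random subsample
# per batch and take empirical quantiles of the pooled subsample.
rng = np.random.default_rng(0)
sample = []
for xb, yb in dg_train:
    d = (yb - xb[..., dg_train.output_idxs]).reshape(-1)
    sample.append(rng.choice(d, size=min(10000, d.size), replace=False))
approx_bin_edges = np.quantile(np.concatenate(sample), np.linspace(0, 1, args['num_bins'] + 1))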
args['is_doi']=True
args['bin_min']=bins[0]; args['bin_max']=bins[-1]
args['adaptive_bins']=bins
args['num_bins'], args['bin_min'], args['bin_max']
#args
dg_train, dg_valid, dg_test = load_data(**args)
x,y=dg_train[0]; print(x.shape, y.shape)
x,y=dg_valid[0]; print(x.shape, y.shape)
x,y=dg_test[0]; print(x.shape, y.shape)
#changing valid shape too. maybe not a good idea.
y.min(), y.max(), y[0,0,0,0,:]
x[0,0,0,0]
#checking if data generator worked
idxs = np.argmax(y, -1)
plt.hist(idxs[...,0].reshape(-1))
# #compare distribution to non-adaptive.
# args['bins']=None; args['bin_min']=-2; args['bin_max']=2
# dg_train, dg_valid, dg_test = load_data(**args)
# x,y=dg_test[0]; print(x.shape, y.shape)
# #remember y is not same bcoz of shuffle in train. so use test.
x[0,0,0,0]
idxs = np.argmax(y, -1)
plt.hist(idxs[...,0].reshape(-1))
#so different distributions with adaptive/non-adaptive. | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Training for adaptive bins | # model2 = build_resnet_categorical(
# **args, input_shape=dg_train.shape,
# )
# # model.summary()
# categorical_loss = create_lat_categorical_loss(dg_train.data.lat, 2)
# model2.compile(keras.optimizers.Adam(1e-3), loss=categorical_loss)
# model_history=model2.fit(dg_train, epochs=50)
# exp_id='categorical_doi_adaptive_bins_v1'
# model_save_dir=args['model_save_dir']
# model2.save(f'{model_save_dir}/{exp_id}.h5')
# model2.save_weights(f'{model_save_dir}/{exp_id}_weights.h5')
# to_pickle(model_history.history, f'{model_save_dir}/{exp_id}_history.pkl')
# checking training
# # list all data in history
# print(history.history.keys())
# # summarize history for accuracy
# plt.plot(history.history['accuracy'])
# plt.plot(history.history['val_accuracy'])
# plt.title('model accuracy')
# plt.ylabel('accuracy')
# plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
# plt.show()
# # summarize history for loss
# plt.plot(history.history['loss'])
# plt.plot(history.history['val_loss'])
# plt.title('model loss')
# plt.ylabel('loss')
# plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
# plt.show() | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Predictions for Adaptive bins | exp_id='categorical_doi_adaptive_bins_v1'
model_save_dir=args['model_save_dir']
model2 = keras.models.load_model(
f'{model_save_dir}/{exp_id}.h5',
custom_objects={'PeriodicConv2D': PeriodicConv2D, 'categorical_loss': keras.losses.mse}
)
#args
#full-data (apr-dec 2018)
preds = create_predictions(model2, dg_test, is_categorical=True, is_doi=True,
bin_min=args['bin_min'], bin_max=args['bin_max'],
adaptive_bins=bins)
preds
#extremely small values may increase numerical error? (x-y)
preds.t.min(), preds.t.max(), preds.t.mean()
preds.t.bin_edges
#surprisingly end points are much larger than with non-adaptive. will Check!! | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Most likely class | # Using bin_mid_points of prediction with highest probability
das = []
for var in ['z', 't']:
idxs = np.argmax(preds[var], -1)
most_likely = preds[var].mid_points[idxs]
das.append(xr.DataArray(
most_likely, dims=['time', 'lat', 'lon'],
coords = [preds.time, preds.lat, preds.lon],
name=var
))
preds_ml = xr.merge(das)
preds_ml
datadir=args['datadir']
z500_valid = load_test_data(f'{datadir}/geopotential_500', 'z').drop('level')
t850_valid = load_test_data(f'{datadir}/temperature_850', 't').drop('level')
valid = xr.merge([z500_valid, t850_valid])
valid
preds_ml_new=preds_ml+valid.shift(time=-dg_test.lead_time)
#be careful of last points (2018-12-28) to 2018-12-31.
#they must contain nan values
preds_ml_new.t.isel(time=-36).values
#preds_new.t.sel(time='2018-12-28T22:00:00').values
#removing last 3 days (naive approach)
preds_ml_new=preds_ml_new.sel(time=slice(None,'2018-12-28T22:00:00'))
preds_ml_new.t.max().values, preds_ml_new.t.min().values, preds_ml_new.t.mean().values
#edges are more extreme!
valid.t.max().values, valid.t.min().values, valid.t.mean().values | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
RMSE | compute_weighted_rmse(preds_ml_new, valid).load()
#almost same as non-adaptive. loss comparable (~2.9 for no-adaptive. ~2.3 for adaptive) | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Binned CRPS | preds['t'].mid_points
#changing Observation directly instead of predictions for binned crps
obs=valid-valid.shift(time=-dg_test.lead_time)
obs=obs.sel(time=preds.time) #reducing to preds size
obs=obs.sel(time=slice(None,'2018-12-28T22:00:00'))#removing nan values
print(
valid.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values,
valid_x_new.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values,
obs.t.isel(lat=0,lon=0).sel(time='2018-05-05T22:00:00').values
)
obs #reduced set. 2018-01-04 to 2018-12-28
obs1 = obs.sel(time='2018-05-05')
preds1 = preds.sel(time='2018-05-05')
compute_weighted_bin_crps(preds1, obs1).load()
#pretty bad again. | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
comparing to - without input difference | from src.data_generator import *
args['bin_min']=-5; args['bin_max']=5 #checked min, max of (x-y) in train.
args['num_bins'], args['bin_min'], args['bin_max']
dg_train, dg_valid, dg_test = load_data(**args)
x,y=dg_train[0]; print(x.shape, y.shape)
x,y=dg_valid[0]; print(x.shape, y.shape)
x,y=dg_test[0]; print(x.shape, y.shape)
y[0,0,0,0,:]
args['bin_min']
model = build_resnet_categorical(
**args, input_shape=dg_train.shape,
)
# model.summary()
categorical_loss = create_lat_categorical_loss(dg_train.data.lat, 2)
model.compile(keras.optimizers.Adam(1e-3), loss=categorical_loss)
model.fit(dg_train, epochs=30)
#Much faster training.
#small test
xtrue ,ytrue=dg_valid[0]
ypred=model.predict(xtrue)
ytrue.shape, ytrue.max(),ytrue.min(), ytrue.mean()
ypred.shape, ypred.max(),ypred.min(), ypred.mean()
#apr-dec
preds = create_predictions(model, dg_test, is_categorical=True)
preds
preds.t.min(), preds.t.max(), preds.t.mean()
idxs = np.argmax(preds.t.isel(time=0), -1)
mp = preds.t.mid_points
# Most likely bin
plt.matshow(mp[idxs])
plt.colorbar();
# Let's do this for all times and compute the RMSE
das = []
for var in ['z', 't']:
idxs = np.argmax(preds[var], -1)
most_likely = preds[var].mid_points[idxs]
das.append(xr.DataArray(
most_likely, dims=['time', 'lat', 'lon'],
coords = [preds.time, preds.lat, preds.lon],
name=var
))
preds_ml = xr.merge(das)
preds_ml
datadir=args['datadir']
z500_valid = load_test_data(f'{datadir}/geopotential_500', 'z').drop('level')
t850_valid = load_test_data(f'{datadir}/temperature_850', 't').drop('level')
valid = xr.merge([z500_valid, t850_valid])
valid=valid.sel(time=preds_ml.time)
compute_weighted_rmse(preds_ml, valid).load()
preds.t.bin_width / 2
preds.t.isel(time=0).max('bin').plot()
plt.bar(preds.t.mid_points, preds.t.isel(time=0).sel(lat=50, lon=300, method='nearest'), preds.t.bin_width)
plt.bar(preds.t.mid_points, preds.t.isel(time=0).sel(lat=0, lon=150, method='nearest'), preds.t.bin_width)
plt.bar(preds.t.mid_points, preds.t.isel(time=0).sel(lat=50, lon=20, method='nearest'), preds.t.bin_width)
plt.bar(preds.t.mid_points, preds.t.isel(time=0).sel(lat=0, lon=0, method='nearest'), preds.t.bin_width) | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Binned CRPS | def compute_bin_crps(obs, preds, bin_edges):
"""
Last axis must be bin axis
obs: [...]
preds: [..., n_bins]
"""
obs = obs.values
preds = preds.values
# Convert observation
a = np.minimum(bin_edges[1:], obs[..., None])
b = bin_edges[:-1] * (bin_edges[0:-1] > obs[..., None])
y = np.maximum(a, b)
# Convert predictions to cumulative predictions with a zero at the beginning
cum_preds = np.cumsum(preds, -1)
cum_preds_zero = np.concatenate([np.zeros((*cum_preds.shape[:-1], 1)), cum_preds], -1)
xmin = bin_edges[..., :-1]
xmax = bin_edges[..., 1:]
lmass = cum_preds_zero[..., :-1]
umass = 1 - cum_preds_zero[..., 1:]
# y = np.atleast_1d(y)
# xmin, xmax = np.atleast_1d(xmin), np.atleast_1d(xmax)
# lmass, lmass = np.atleast_1d(lmass), np.atleast_1d(lmass)
scale = xmax - xmin
# print('scale =', scale)
y_scale = (y - xmin) / scale
# print('y_scale = ', y_scale)
z = y_scale.copy()
z[z < 0] = 0
z[z > 1] = 1
# print('z =', z)
a = 1 - (lmass + umass)
# print('a =', a)
crps = (
np.abs(y_scale - z) + z**2 * a - z * (1 - 2*lmass) +
a**2 / 3 + (1 - lmass) * umass
)
return np.sum(scale * crps, -1)
def compute_weighted_bin_crps(da_fc, da_true, mean_dims=xr.ALL_DIMS):
"""
"""
t = np.intersect1d(da_fc.time, da_true.time)
da_fc, da_true = da_fc.sel(time=t), da_true.sel(time=t)
weights_lat = np.cos(np.deg2rad(da_true.lat))
weights_lat /= weights_lat.mean()
dims = ['time', 'lat', 'lon']
if type(da_true) is xr.Dataset:
das = []
for var in da_true:
result = compute_bin_crps(da_true[var], da_fc[var], da_fc[var].bin_edges)
das.append(xr.DataArray(
result, dims=dims, coords=dict(da_true.coords), name=var
))
crps = xr.merge(das)
else:
result = compute_bin_crps(da_true, da_fc, da_fc.bin_edges)
crps = xr.DataArray(
result, dims=dims, coords=dict(da_true.coords), name=da_fc.name
)
crps = (crps * weights_lat).mean(mean_dims)
return crps
valid
compute_weighted_bin_crps(preds, valid)
# Ignore below | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Adaptive binning | args['is_categorical']=False
dg_train, dg_valid, dg_test = load_data(**args)
args['is_categorical']=True
x,y=dg_train[0]; print(x.shape, y.shape)
diff=y-x[...,dg_train.output_idxs]
print(diff.min(), diff.max(), diff.mean())
plt.hist(diff.reshape(-1))
diff=[]
for x,y in dg_train:
diff.append(y-x[...,dg_train.output_idxs])
diff = np.array([ elem for singleList in diff for elem in singleList])
diff.shape
diff_shape=diff.shape
diff2, bins=pd.qcut(diff.reshape(-1), args['num_bins'],
labels=False, retbins=True)
diff2=diff2.reshape(diff_shape)
diff2.shape, diff2.max(), diff2.min(), diff2.mean()
bins
diff2=np_utils.to_categorical(diff2, num_classes=args['num_bins'])
diff2.shape, diff2.max(), diff2.min(), diff2.mean()
diff3=diff
diff3_shape=diff3.shape
diff3.shape
diff3=pd.cut(diff3.reshape(-1), bins, labels=False).reshape(diff3_shape)
diff3.shape
diff3
diff3.shape, diff3.max(), diff3.min(), diff3.mean()
diff3[:,:,:,1].min()
np.argwhere(np.isnan(diff3))
diff[1743,25,49,1] | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
Unnormalized Data | a1=np.arange(100)
mean=np.mean(a1); std=np.std(a1)
a1_norm=(a1-mean)/std
a1_norm
a1[4]-a1[2]
diff=a1_norm[4]-a1_norm[2]
diff
diff*std | _____no_output_____ | MIT | nbs_probabilistic/07.2 - Difference of Input.ipynb | sagar-garg/WeatherBench |
$f(x)=\exp(\sin(\pi x))$, integrate from $-1$ to $1$. --- | import math
import numpy as np
def f(x):
return math.exp(np.sin(np.pi*x))
n=10
k=-1
result=0
for i in range(n):
result+=f(k)/n
result+=f(k+2/n)/n
k=k+2/n
print(result)
n=20
k=-1
result=0
for i in range(n):
result+=f(k)/n
result+=f(k+2/n)/n
k=k+2/n
print(result)
n=40
k=-1
result=0
for i in range(n):
result+=f(k)/n
result+=f(k+2/n)/n
k=k+2/n
print(result)
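# The three cells above repeat the same composite trapezoidal rule by hand; an equivalent
# compact version (illustrative sketch) using numpy for a quick cross-check:
def trapezoid(g, a, b, n):
    x = np.linspace(a, b, n + 1)
    return np.trapz([g(xi) for xi in x], x)
for n in (10, 20, 40):
    print(n, trapezoid(f, -1.0, 1.0, n))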
| _____no_output_____ | MIT | sec5exercise02a.ipynb | teshenglin/computational_mathematics |
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need. | import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline | _____no_output_____ | MIT | Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb | prakhargurawa/PyTorch-ML |
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of some where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored | # Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/" | _____no_output_____ | MIT | Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb | prakhargurawa/PyTorch-ML |
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```. | # Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
len(IMAGE_LIST) | _____no_output_____ | MIT | Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb | prakhargurawa/PyTorch-ML |
--- 1. Visualize the input images | # Select an image and its label by list index
image_index = 0
selected_image = IMAGE_LIST[image_index][0]
selected_label = IMAGE_LIST[image_index][1]
## TODO: Print out 1. The shape of the image and 2. The image's label `selected_label`
print(selected_image.shape)
print(selected_label)
## TODO: Display a night image
# Note the differences between the day and night images
# Any measurable differences can be used to classify these images
image_index = 200
selected_image = IMAGE_LIST[image_index][0]
selected_label = IMAGE_LIST[image_index][1]
## TODO: Print out 1. The shape of the image and 2. The image's label `selected_label`
print(selected_image.shape)
print(selected_label)
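# One measurable difference hinted at by the TODO above (illustrative sketch; assumes the
# loaded images are RGB uint8 arrays): average brightness from the V channel in HSV space.
def avg_brightness(rgb_image):
    hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
    return hsv[:, :, 2].mean()
print('Average brightness:', avg_brightness(selected_image))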
plt.imshow(selected_image) | (458, 800, 3)
day
(700, 1280, 3)
night
| MIT | Intro-To-Computer-Vision-1/1_1_Image_Representation/6_1. Visualizing the Data.ipynb | prakhargurawa/PyTorch-ML |
# Installs
%%capture
!pip install --upgrade category_encoders plotly
# Imports
import os, sys
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
!pip install -r requirements.txt
os.chdir('module1')
# Disable warning
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# Imports
import pandas as pd
import numpy as np
import math
import sklearn
sklearn.__version__
# Import the models
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
# Import encoder and scaler and imputer
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
# Import random forest classifier
from sklearn.ensemble import RandomForestClassifier
# Import, load data and split data into train, validate and test
train_features = pd.read_csv('../data/tanzania/train_features.csv')
train_labels = pd.read_csv('../data/tanzania/train_labels.csv')
test_features = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
# Load initial train features and labels
from sklearn.model_selection import train_test_split
X_train = train_features
y_train = train_labels['status_group']
# Split the initial train features and labels 80% into new train and new validation
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, train_size = 0.80, test_size = 0.20,
stratify = y_train, random_state=42
)
X_train.shape, X_val.shape, y_train.shape, y_val.shape
# Wrangle train, validate, and test sets
def wrangle(X):
# Set bins value
bins=20
# Prevent SettingWithCopyWarning
X = X.copy()
# Clean installer
X['installer'] = X['installer'].str.lower()
X['installer'] = X['installer'].str.replace('danid', 'danida')
X['installer'] = X['installer'].str.replace('disti', 'district council')
X['installer'] = X['installer'].str.replace('commu', 'community')
X['installer'] = X['installer'].str.replace('central government', 'government')
X['installer'] = X['installer'].str.replace('kkkt _ konde and dwe', 'kkkt')
X['installer'].value_counts(normalize=True)
tops = X['installer'].value_counts()[:5].index
X.loc[~X['installer'].isin(tops), 'installer'] = 'Other'
# Clean funder and bin
X['funder'] = X['funder'].str.lower()
X['funder'] = X['funder'].str[:3]
X['funder'].value_counts(normalize=True)
tops = X['funder'].value_counts()[:20].index
X.loc[~X['funder'].isin(tops), 'funder'] = 'Other'
# Use mean for gps_height missing values
X.loc[X['gps_height'] == 0, 'gps_height'] = X['gps_height'].mean()
# Bin lga
#tops = X['lga'].value_counts()[:10].index
#X.loc[~X['lga'].isin(tops), 'lga'] = 'Other'
# Bin ward
#tops = X['ward'].value_counts()[:bins].index
#X.loc[~X['ward'].isin(tops), 'ward'] = 'Other'
# Bin subvillage
    tops = X['subvillage'].value_counts()[:10].index
    X.loc[~X['subvillage'].isin(tops), 'subvillage'] = 'Other'
# Clean latitude and longitude
average_lat = X.groupby('region').latitude.mean().reset_index()
average_long = X.groupby('region').longitude.mean().reset_index()
shinyanga_lat = average_lat.loc[average_lat['region'] == 'Shinyanga', 'latitude']
shinyanga_long = average_long.loc[average_lat['region'] == 'Shinyanga', 'longitude']
X.loc[(X['region'] == 'Shinyanga') & (X['latitude'] > -1), ['latitude']] = shinyanga_lat[17]
X.loc[(X['region'] == 'Shinyanga') & (X['longitude'] == 0), ['longitude']] = shinyanga_long[17]
mwanza_lat = average_lat.loc[average_lat['region'] == 'Mwanza', 'latitude']
mwanza_long = average_long.loc[average_lat['region'] == 'Mwanza', 'longitude']
X.loc[(X['region'] == 'Mwanza') & (X['latitude'] > -1), ['latitude']] = mwanza_lat[13]
X.loc[(X['region'] == 'Mwanza') & (X['longitude'] == 0) , ['longitude']] = mwanza_long[13]
# Impute mean for tsh based on mean of source_class/basin/waterpoint_type_group
def tsh_calc(tsh, source, base, waterpoint):
if tsh == 0:
if (source, base, waterpoint) in tsh_dict:
new_tsh = tsh_dict[source, base, waterpoint]
return new_tsh
else:
return tsh
return tsh
temp = X[X['amount_tsh'] != 0].groupby(['source_class',
'basin',
'waterpoint_type_group'])['amount_tsh'].mean()
tsh_dict = dict(temp)
X['amount_tsh'] = X.apply(lambda x: tsh_calc(x['amount_tsh'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1)
# Impute mean for the feature based on latitude and longitude
def latlong_conversion(feature, pop, long, lat):
radius = 0.1
radius_increment = 0.3
if pop <= 1:
pop_temp = pop
while pop_temp <= 1 and radius <= 2:
lat_from = lat - radius
lat_to = lat + radius
long_from = long - radius
long_to = long + radius
df = X[(X['latitude'] >= lat_from) &
(X['latitude'] <= lat_to) &
(X['longitude'] >= long_from) &
(X['longitude'] <= long_to)]
pop_temp = df[feature].mean()
if math.isnan(pop_temp):
pop_temp = pop
radius = radius + radius_increment
else:
pop_temp = pop
if pop_temp <= 1:
new_pop = X_train[feature].mean()
else:
new_pop = pop_temp
return new_pop
# Impute gps_height based on location
#X['population'] = X.apply(lambda x: latlong_conversion('population', x['population'], x['longitude'], x['latitude']), axis=1)
# Impute gps_height based on location
#X['gps_height'] = X.apply(lambda x: latlong_conversion('gps_height', x['gps_height'], x['longitude'], x['latitude']), axis=1)
# quantity & quantity_group are duplicates, so drop quantity_group
X = X.drop(columns='quantity_group')
X = X.drop(columns='num_private')
# return the wrangled dataframe
return X
# Wrangle the data
X_train = wrangle(X_train)
X_val = wrangle(X_val)
# Feature engineering
def feature_engineer(X):
# Create new feature pump_age
X['pump_age'] = 2013 - X['construction_year']
X.loc[X['pump_age'] == 2013, 'pump_age'] = 0
X.loc[X['pump_age'] == 0, 'pump_age'] = 10
# Create new feature region_district
X['region_district'] = X['region_code'].astype(str) + X['district_code'].astype(str)
#X['tsh_pop'] = X['amount_tsh']/X['population']
return X
# Feature engineer the data
X_train = feature_engineer(X_train)
X_val = feature_engineer(X_val)
# Encode a feature
def encode_feature(X, y, str):
X['status_group'] = y
X.groupby(str)['status_group'].value_counts(normalize=True)
X['functional']= (X['status_group'] == 'functional').astype(int)
X[['status_group', 'functional']]
return X
# Encode all the categorical features
train = X_train.copy()
train = encode_feature(train, y_train, 'quantity')
train = encode_feature(train, y_train, 'waterpoint_type')
train = encode_feature(train, y_train, 'extraction_type')
train = encode_feature(train, y_train, 'installer')
train = encode_feature(train, y_train, 'funder')
train = encode_feature(train, y_train, 'water_quality')
train = encode_feature(train, y_train, 'basin')
train = encode_feature(train, y_train, 'region')
train = encode_feature(train, y_train, 'payment')
train = encode_feature(train, y_train, 'source')
#train = encode_feature(train, y_train, 'lga')
#train = encode_feature(train, y_train, 'ward')
#train = encode_feature(train, y_train, 'scheme_management')
train = encode_feature(train, y_train, 'management')
train = encode_feature(train, y_train, 'region_district')
train = encode_feature(train, y_train, 'subvillage')
# use quantity feature and the numerical features but drop id
categorical_features = ['quantity', 'waterpoint_type', 'extraction_type', 'installer',
'funder', 'water_quality', 'basin', 'region', 'payment',
'source', 'management', 'region_district', 'subvillage']
numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist()
features = categorical_features + numeric_features
# make subsets using the quantity feature all numeric features except id
X_train = X_train[features]
X_val = X_val[features]
# Create the logistic regression pipeline
pipeline = make_pipeline (
ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
LogisticRegressionCV(random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
# Create the random forest pipeline
pipeline = make_pipeline (
ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
RandomForestClassifier(n_estimators=1000,
random_state=42,
min_samples_leaf=1,
max_features = 'auto',
n_jobs=-1,
verbose = 1)
)
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
pd.set_option('display.max_columns', 100)
model = pipeline.named_steps['randomforestclassifier']
encoder = pipeline.named_steps['onehotencoder']
encoded_columns = encoder.transform(X_train).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
importances.sort_values(ascending=False)
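# Optional visualization (illustrative sketch): plot the 20 largest one-hot-encoded
# feature importances from the random forest.
import matplotlib.pyplot as plt
importances.sort_values(ascending=False)[:20].plot.barh(figsize=(8, 8))
plt.gca().invert_yaxis()
plt.show()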
test_features['pump_age'] = 2013 - test_features['construction_year']
test_features.loc[test_features['pump_age'] == 2013, 'pump_age'] = 0
test_features.loc[test_features['pump_age'] == 0, 'pump_age'] = 10
test_features['region_district'] = test_features['region_code'].astype(str) + test_features['district_code'].astype(str)
test_features['tsh_pop'] = test_features['amount_tsh']/test_features['population']
test_features = test_features.drop(columns=['num_private'])
X_test = test_features[features]
assert all(X_test.columns == X_train.columns)
y_pred = pipeline.predict(X_test)
#submission = sample_submission.copy()
#submission['status_group'] = y_pred
#submission.to_csv('/content/submission-01.csv', index=False) | _____no_output_____ | MIT | Kaggle_Challenge_Assignment7.ipynb | JimKing100/DS-Unit-2-Kaggle-Challenge |
|
 [](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb) Adverse Drug Event (ADE) Pretrained NER and Classifier Models `ADE NER`: Extracts ADE and DRUG entities from clinical texts.`ADE Classifier`: CLassify if a sentence is ADE-related (`True`) or not (`False`)We use several datasets to train these models:- Twitter dataset, which is used in paper "`Deep learning for pharmacovigilance: recurrent neural network architectures for labeling adverse drug reactions in Twitter posts`" (https://pubmed.ncbi.nlm.nih.gov/28339747/)- ADE-Corpus-V2, which is used in paper "`An Attentive Sequence Model for Adverse Drug Event Extraction from Biomedical Text`" (https://arxiv.org/abs/1801.00625) and available online: https://sites.google.com/site/adecorpus/home/document.- CADEC dataset, which is used in paper `Cadec: A corpus of adverse drug event annotations` (https://pubmed.ncbi.nlm.nih.gov/25817970) | import json
from google.colab import files
license_keys = files.upload()
with open(list(license_keys.keys())[0]) as f:
license_keys = json.load(f)
%%capture
for k,v in license_keys.items():
%set_env $k=$v
!wget https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/jsl_colab_setup.sh
!bash jsl_colab_setup.sh
import json
import os
from pyspark.ml import Pipeline,PipelineModel
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
import sparknlp
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(license_keys['SECRET'],params=params)
print (sparknlp.version())
print (sparknlp_jsl.version()) | 3.0.1
3.0.0
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
ADE Classifier Pipeline (with a pretrained model)

`True`: The sentence is talking about a possible ADE.
`False`: The sentence doesn't have any information about an ADE.

ADE Classifier with BioBERT | # Annotator that transforms a text column from dataframe into an Annotation ready for NLP
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("sentence")
# Tokenizer splits words in a relevant format for NLP
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
bert_embeddings = BertEmbeddings.pretrained("biobert_pubmed_base_cased")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")\
.setMaxSentenceLength(512)
embeddingsSentence = SentenceEmbeddings() \
.setInputCols(["sentence", "embeddings"]) \
.setOutputCol("sentence_embeddings") \
.setPoolingStrategy("AVERAGE")\
.setStorageRef('biobert_pubmed_base_cased')
classsifierdl = ClassifierDLModel.pretrained("classifierdl_ade_biobert", "en", "clinical/models")\
.setInputCols(["sentence", "sentence_embeddings"]) \
.setOutputCol("class")
ade_clf_pipeline = Pipeline(
stages=[documentAssembler,
tokenizer,
bert_embeddings,
embeddingsSentence,
classsifierdl])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_clf_model = ade_clf_pipeline.fit(empty_data)
ade_lp_pipeline = LightPipeline(ade_clf_model)
text = "I feel a bit drowsy & have a little blurred vision after taking an insulin"
ade_lp_pipeline.annotate(text)['class'][0]
text="I just took an Advil and have no gastric problems so far."
ade_lp_pipeline.annotate(text)['class'][0] | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
As you can see, `gastric problems` is not detected as `ADE` because it appears in a negative context, so the classifier did a good job handling that. | text="I just took a Metformin and started to feel dizzy."
ade_lp_pipeline.annotate(text)['class'][0]
t='''
Always tired, and possible blood clots. I was on Voltaren for about 4 years and all of the sudden had a minor stroke and had blood clots that traveled to my eye. I had every test in the book done at the hospital, and they couldn't find anything. I was completley healthy! I am thinking it was from the voltaren. I have been off of the drug for 8 months now, and have never felt better. I started eating healthy and working out and that has help alot. I can now sleep all thru the night. I wont take this again. If I have the back pain, I will pop a tylonol instead.
'''
ade_lp_pipeline.annotate(t)['class'][0]
texts = ["I feel a bit drowsy & have a little blurred vision, after taking a pill.",
"I've been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it.",
"Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to take it every day for the next month to see how I get on, here goes.",
"So far its been very good, pains almost gone, but I feel a bit weird, didn't have that when on 50."]
for text in texts:
result = ade_lp_pipeline.annotate(text)
print (result['class'][0])
| True
False
True
False
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
ADE Classifier trained with conversational (short) sentences

This model is trained on short, conversational sentences related to ADE and is expected to do better on short, everyday text. | conv_classsifierdl = ClassifierDLModel.pretrained("classifierdl_ade_conversational_biobert", "en", "clinical/models")\
.setInputCols(["sentence", "sentence_embeddings"]) \
.setOutputCol("class")
conv_ade_clf_pipeline = Pipeline(
stages=[documentAssembler,
tokenizer,
bert_embeddings,
embeddingsSentence,
conv_classsifierdl])
empty_data = spark.createDataFrame([[""]]).toDF("text")
conv_ade_clf_model = conv_ade_clf_pipeline.fit(empty_data)
conv_ade_lp_pipeline = LightPipeline(conv_ade_clf_model)
text = "after taking a pill, he denies any pain"
conv_ade_lp_pipeline.annotate(text)['class'][0] | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
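As a quick usage sketch (the short sentences below are made up purely for illustration), the same conv_ade_lp_pipeline LightPipeline built above can be run over a batch of conversational texts:

# Illustrative only: made-up conversational sentences run through the pipeline built above.
short_texts = [
    "Took my insulin this morning and now my vision is blurry.",
    "Had a coffee and an aspirin, feeling totally fine.",
    "That new antibiotic gave me a terrible rash.",
]
for t in short_texts:
    print(conv_ade_lp_pipeline.annotate(t)['class'][0], '-', t)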
ADE NER: Extracts `ADE` and `DRUG` entities from text. | documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
ade_ner = MedicalNerModel.pretrained("ner_ade_clinical", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
ade_ner,
ner_converter])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ner_model = ner_pipeline.fit(empty_data)
ade_ner_lp = LightPipeline(ade_ner_model)
light_result = ade_ner_lp.fullAnnotate("I feel a bit drowsy & have a little blurred vision, so far no gastric problems. I have been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it. Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to take it every day for the next month to see how I get on, here goes. So far its been very good, pains almost gone, but I feel a bit weird, didn't have that when on 50.")
chunks = []
entities = []
begin =[]
end = []
for n in light_result[0]['ner_chunk']:
begin.append(n.begin)
end.append(n.end)
chunks.append(n.result)
entities.append(n.metadata['entity'])
import pandas as pd
df = pd.DataFrame({'chunks':chunks, 'entities':entities,
'begin': begin, 'end': end})
df | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
As you can see, `gastric problems` is not detected as `ADE` because it appears in a negative context, so the NER model did a good job ignoring it.

ADE NER with BioBERT embeddings | documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
bert_embeddings = BertEmbeddings.pretrained("biobert_pubmed_base_cased")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
ade_ner_bert = MedicalNerModel.pretrained("ner_ade_biobert", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
ner_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
bert_embeddings,
ade_ner_bert,
ner_converter])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ner_model_bert = ner_pipeline.fit(empty_data)
ade_ner_lp_bert = LightPipeline(ade_ner_model_bert)
light_result = ade_ner_lp_bert.fullAnnotate("I feel a bit drowsy & have a little blurred vision, so far no gastric problems. I have been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it. Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to take it every day for the next month to see how I get on, here goes. So far its been very good, pains almost gone, but I feel a bit weird, didn't have that when on 50.")
chunks = []
entities = []
begin =[]
end = []
for n in light_result[0]['ner_chunk']:
begin.append(n.begin)
end.append(n.end)
chunks.append(n.result)
entities.append(n.metadata['entity'])
import pandas as pd
df = pd.DataFrame({'chunks':chunks, 'entities':entities,
'begin': begin, 'end': end})
df | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
It looks like the BioBERT version of the NER model returns more entities than the clinical embeddings version.

NER and Classifier combined with AssertionDL Model | assertion_ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ass_ner_chunk")\
.setWhiteList(['ADE'])
biobert_assertion = AssertionDLModel.pretrained("assertion_dl_biobert", "en", "clinical/models") \
.setInputCols(["sentence", "ass_ner_chunk", "embeddings"]) \
.setOutputCol("assertion")
assertion_ner_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
bert_embeddings,
ade_ner_bert,
ner_converter,
assertion_ner_converter,
biobert_assertion])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ass_ner_model_bert = assertion_ner_pipeline.fit(empty_data)
ade_ass_ner_model_lp_bert = LightPipeline(ade_ass_ner_model_bert)
import pandas as pd
text = "I feel a bit drowsy & have a little blurred vision, so far no gastric problems. I have been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it. Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to take it every day for the next month to see how I get on, here goes. So far its been very good, pains almost gone, but I feel a bit weird, didn't have that when on 50."
print (text)
light_result = ade_ass_ner_model_lp_bert.fullAnnotate(text)[0]
chunks=[]
entities=[]
status=[]
for n,m in zip(light_result['ass_ner_chunk'],light_result['assertion']):
chunks.append(n.result)
entities.append(n.metadata['entity'])
status.append(m.result)
df = pd.DataFrame({'chunks':chunks, 'entities':entities, 'assertion':status})
df | I feel a bit drowsy & have a little blurred vision, so far no gastric problems. I have been on Arthrotec 50 for over 10 years on and off, only taking it when I needed it. Due to my arthritis getting progressively worse, to the point where I am in tears with the agony, gp's started me on 75 twice a day and I have to take it every day for the next month to see how I get on, here goes. So far its been very good, pains almost gone, but I feel a bit weird, didn't have that when on 50.
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
Looks great! `gastric problems` is detected as `ADE` and asserted as `absent`.

ADE models applied to Spark Dataframes | import pyspark.sql.functions as F
! wget -q https://raw.githubusercontent.com/JohnSnowLabs/spark-nlp-workshop/master/tutorials/Certification_Trainings/Healthcare/data/sample_ADE_dataset.csv
ade_DF = spark.read\
.option("header", "true")\
.csv("./sample_ADE_dataset.csv")\
.filter(F.col("label").isin(['True','False']))
ade_DF.show(truncate=50) | +--------------------------------------------------+-----+
| text|label|
+--------------------------------------------------+-----+
|Do U know what Meds are R for bipolar depressio...|False|
|# hypercholesterol: Because of elevated CKs (pe...| True|
|Her weight, respirtory status and I/O should be...|False|
|* DM - Pt had several episodes of hypoglycemia ...| True|
|We report the case of a female acromegalic pati...| True|
|2 . Calcipotriene 0.005% Cream Sig: One (1) App...|False|
|Always tired, and possible blood clots. I was o...| True|
|A difference in chemical structure between thes...|False|
|10 . She was left on prednisone 20mg qd due to ...|False|
|The authors suggest that risperidone may increa...| True|
|- Per oral maxillofacial surgery there is no ev...|False|
|@marionjross Cipro is just as bad! Stay away fr...|False|
|A young woman with epilepsy had tonic-clonic se...| True|
|Intravenous methotrexate is an effective adjunc...|False|
|PURPOSE: To report new indocyanine green angiog...|False|
|2 . Docusate Sodium 50 mg/5 mL Liquid [**Hospit...|False|
| consider neupogen.|False|
|He was treated allopurinol and Rasburicase for ...|False|
|Toxicity, pharmacokinetics, and in vitro hemodi...| True|
|# thrombocytopenia: Secondary to chemotherapy a...| True|
+--------------------------------------------------+-----+
only showing top 20 rows
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
**With the BioBERT version of NER** (slower but more accurate) | import pyspark.sql.functions as F
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['ADE'])
ner_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
bert_embeddings,
ade_ner_bert,
ner_converter])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ner_model = ner_pipeline.fit(empty_data)
result = ade_ner_model.transform(ade_DF)
sample_df = result.select('text','ner_chunk.result')\
.toDF('text','ADE_phrases').filter(F.size('ADE_phrases')>0).toPandas()
import pandas as pd
pd.set_option('display.max_colwidth', 0)
sample_df.sample(20) | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
**Doing the same with the clinical embeddings version** (faster results) | import pyspark.sql.functions as F
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['ADE'])
ner_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
ade_ner,
ner_converter])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ner_model = ner_pipeline.fit(empty_data)
result = ade_ner_model.transform(ade_DF)
result.select('text','ner_chunk.result')\
.toDF('text','ADE_phrases').filter(F.size('ADE_phrases')>0)\
.show(truncate=70)
| +----------------------------------------------------------------------+----------------------------------------------------------------------+
| text| ADE_phrases|
+----------------------------------------------------------------------+----------------------------------------------------------------------+
|# hypercholesterol: Because of elevated CKs (peaked at 819) the pat...| [elevated CKs]|
|We report the case of a female acromegalic patient in whom multiple...| [multiple hepatic adenomas]|
|Always tired, and possible blood clots. I was on Voltaren for about...| [blood clots that traveled to my eye, back pain]|
|The authors suggest that risperidone may increase affect in patient...| [increase affect]|
|A young woman with epilepsy had tonic-clonic seizures during antine...| [tonic-clonic seizures]|
|Intravenous methotrexate is an effective adjunct to steroid therapy...| [dermatomyositis-polyositis]|
|PURPOSE: To report new indocyanine green angiographic (ICGA) findin...| [indocyanine green angiographic (ICGA) findings]|
|Toxicity, pharmacokinetics, and in vitro hemodialysis clearance of ...| [Toxicity]|
| # thrombocytopenia: Secondary to chemotherapy and MDS/AML concerns.| [thrombocytopenia]|
|A fatal massive pulmonary embolus developed in a patient treated wi...| [fatal massive pulmonary embolus]|
|# Maculopapular rash: over extremities, chest and back, thought [**...| [Maculopapular rash]|
| Hypokalemia after normal doses of neubulized albuterol (salbutamol).| [Hypokalemia]|
|A transient tonic pupillary response, denervation supersensitivity,...|[transient tonic pupillary response, denervation supersensitivity, ...|
|As per above, ID added Atovaquone for PCP [**Name9 (PRE) *] given t...| [BM suppression, liver damage]|
|Electrocardiographic findings and laboratory data indicated a diagn...| [acute myocardial infarction]|
| Hepatic reactions to cyclofenil.| [Hepatic reactions]|
|Therefore, parenteral amiodarone was implicated as the cause of acu...| [acute hepatitis]|
|Vincristine levels were also assayed and showed a dramatic decline ...| [dramatic decline in postexchange levels]|
|Eight days after the end of interferon treatment, he showed signs o...| [inability to sit]|
|2 years with no problems, then toe neuropathy for two years now and...|[toe neuropathy, toe neuropathy, stomach problems, pain, heart woul...|
+----------------------------------------------------------------------+----------------------------------------------------------------------+
only showing top 20 rows
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
Creating a sentence dataframe (one sentence per row) and getting ADE entities and categories | documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")\
.setExplodeSentences(True)
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
bert_embeddings = BertEmbeddings.pretrained("biobert_pubmed_base_cased")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
embeddingsSentence = SentenceEmbeddings() \
.setInputCols(["sentence", "embeddings"]) \
.setOutputCol("sentence_embeddings") \
.setPoolingStrategy("AVERAGE")\
.setStorageRef('biobert_pubmed_base_cased')
classsifierdl = ClassifierDLModel.pretrained("classifierdl_ade_biobert", "en", "clinical/models")\
.setInputCols(["sentence", "sentence_embeddings"]) \
.setOutputCol("class")\
.setStorageRef('biobert_pubmed_base_cased')
ade_ner = MedicalNerModel.pretrained("ner_ade_biobert", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['ADE'])
ner_clf_pipeline = Pipeline(
stages=[documentAssembler,
sentenceDetector,
tokenizer,
bert_embeddings,
embeddingsSentence,
classsifierdl,
ade_ner,
ner_converter])
ade_Sentences = ner_clf_pipeline.fit(ade_DF)
import pyspark.sql.functions as F
ade_Sentences.transform(ade_DF).select('sentence.result','ner_chunk.result','class.result')\
.toDF('sentence','ADE_phrases','is_ADE').show(truncate=60) | +------------------------------------------------------------+---------------------------------------------+-------+
| sentence| ADE_phrases| is_ADE|
+------------------------------------------------------------+---------------------------------------------+-------+
| [Do U know what Meds are R for bipolar depression?]| []|[False]|
| [Currently #FDA approved #quetiapine AKA #Seroquel]| []|[False]|
|[# hypercholesterol: Because of elevated CKs (peaked at 8...| [elevated CKs]|[False]|
|[Her weight, respirtory status and I/O should be monitore...| []|[False]|
|[* DM - Pt had several episodes of hypoglycemia on lantus...| [hypoglycemia]| [True]|
|[We report the case of a female acromegalic patient in wh...| [hepatic adenomas]| [True]|
| [2 .]| []|[False]|
|[Calcipotriene 0.005% Cream Sig: One (1) Appl Topical [**...| []|[False]|
| [Always tired, and possible blood clots.]| [tired, blood clots]|[False]|
|[I was on Voltaren for about 4 years and all of the sudde...|[stroke, blood clots that traveled to my eye]|[False]|
|[I had every test in the book done at the hospital, and t...| []|[False]|
| [I was completley healthy!]| [completley healthy]|[False]|
| [I am thinking it was from the voltaren.]| []|[False]|
|[I have been off of the drug for 8 months now, and have n...| []|[False]|
|[I started eating healthy and working out and that has he...| []|[False]|
| [I can now sleep all thru the night.]| []|[False]|
| [I wont take this again.]| []|[False]|
| [If I have the back pain, I will pop a tylonol instead.]| []|[False]|
|[A difference in chemical structure between these two dru...| []|[False]|
| [10 .]| []|[False]|
+------------------------------------------------------------+---------------------------------------------+-------+
only showing top 20 rows
| Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
Creating a pretrained pipeline with ADE NER, Assertion and Classifier | # Annotator that transforms a text column from dataframe into an Annotation ready for NLP
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("sentence")
# Tokenizer splits words in a relevant format for NLP
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
bert_embeddings = BertEmbeddings.pretrained("biobert_pubmed_base_cased")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
ade_ner = MedicalNerModel.pretrained("ner_ade_biobert", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")\
.setStorageRef('biobert_pubmed_base_cased')
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")
assertion_ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ass_ner_chunk")\
.setWhiteList(['ADE'])
biobert_assertion = AssertionDLModel.pretrained("assertion_dl_biobert", "en", "clinical/models") \
.setInputCols(["sentence", "ass_ner_chunk", "embeddings"]) \
.setOutputCol("assertion")
embeddingsSentence = SentenceEmbeddings() \
.setInputCols(["sentence", "embeddings"]) \
.setOutputCol("sentence_embeddings") \
.setPoolingStrategy("AVERAGE")\
.setStorageRef('biobert_pubmed_base_cased')
classsifierdl = ClassifierDLModel.pretrained("classifierdl_ade_conversational_biobert", "en", "clinical/models")\
.setInputCols(["sentence", "sentence_embeddings"]) \
.setOutputCol("class")
ade_clf_pipeline = Pipeline(
stages=[documentAssembler,
tokenizer,
bert_embeddings,
ade_ner,
ner_converter,
assertion_ner_converter,
biobert_assertion,
embeddingsSentence,
classsifierdl])
empty_data = spark.createDataFrame([[""]]).toDF("text")
ade_ner_clf_model = ade_clf_pipeline.fit(empty_data)
ade_ner_clf_pipeline = LightPipeline(ade_ner_clf_model)
classsifierdl.getStorageRef()
text = 'Always tired, and possible blood clots. I was on Voltaren for about 4 years and all of the sudden had a minor stroke and had blood clots that traveled to my eye. I had every test in the book done at the hospital, and they couldnt find anything. I was completley healthy! I am thinking it was from the voltaren. I have been off of the drug for 8 months now, and have never felt better. I started eating healthy and working out and that has help alot. I can now sleep all thru the night. I wont take this again. If I have the back pain, I will pop a tylonol instead.'
light_result = ade_ner_clf_pipeline.fullAnnotate(text)
print (light_result[0]['class'][0].metadata)
chunks = []
entities = []
begin =[]
end = []
for n in light_result[0]['ner_chunk']:
begin.append(n.begin)
end.append(n.end)
chunks.append(n.result)
entities.append(n.metadata['entity'])
import pandas as pd
df = pd.DataFrame({'chunks':chunks, 'entities':entities,
'begin': begin, 'end': end})
df
import pandas as pd
text = 'I have always felt tired, but no blood clots. I was on Voltaren for about 4 years and all of the sudden had a minor stroke and had blood clots that traveled to my eye. I had every test in the book done at the hospital, and they couldnt find anything. I was completley healthy! I am thinking it was from the voltaren. I have been off of the drug for 8 months now, and have never felt better. I started eating healthy and working out and that has help alot. I can now sleep all thru the night. I wont take this again. If I have the back pain, I will pop a tylonol instead.'
print (text)
light_result = ade_ass_ner_model_lp_bert.fullAnnotate(text)[0]
chunks=[]
entities=[]
status=[]
for n,m in zip(light_result['ass_ner_chunk'],light_result['assertion']):
chunks.append(n.result)
entities.append(n.metadata['entity'])
status.append(m.result)
df = pd.DataFrame({'chunks':chunks, 'entities':entities, 'assertion':status})
df
result = ade_ner_clf_pipeline.annotate('I just took an Advil 100 mg and it made me drowsy')
print (result['class'])
print(list(zip(result['token'],result['ner'])))
ade_ner_clf_model.save('ade_pretrained_pipeline')
from sparknlp.pretrained import PretrainedPipeline
ade_pipeline = PretrainedPipeline.from_disk('ade_pretrained_pipeline')
ade_pipeline.annotate('I just took an Advil 100 mg then it made me drowsy')
ade_pipeline.model.stages | _____no_output_____ | Apache-2.0 | tutorials/Certification_Trainings/Healthcare/16.Adverse_Drug_Event_ADE_NER_and_Classifier.ipynb | gkovaig/spark-nlp-workshop |
Slice 136 patient 0002 | from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
np.unique(d[2]),
d[2])
class_weights
import keras
model = keras.models.load_model('trial_0001_MFCcas_dim2_128_acc.h5')
m_info = m.fit([X1,X2],y,epochs= 20,batch_size = 256,class_weight = class_weights)
import matplotlib.pyplot as plt
plt.plot(m_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
m.save('trial_MFCcascade_acc.h5') | _____no_output_____ | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
Evaluate on the 128th slice of patient 0002 | model.evaluate([X1,X2],y,batch_size = 1024)
model_info = model.fit([X1,X2],y,epochs=30,batch_size=256,class_weight= class_weights)
import matplotlib.pyplot as plt
plt.plot(model_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
model.save('trial_0001_MFCcas_dim2_128_acc.h5') | _____no_output_____ | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
Evaluate on the 100th slice of patient 0001 | model.evaluate([X1,X2],y,batch_size = 1024)
pred = model.predict([X1,X2],batch_size = 1024)
pred = np.around(pred)
pred1 = np.dot(pred.reshape(17589,5),np.array([0,1,2,3,4]))
y1 = np.dot(y.reshape(17589,5),np.array([0,1,2,3,4]))
y2 = np.argmax(y.reshape(17589,5),axis = 1)
y2.all() == 0
y1.all()==0
from sklearn import metrics
f1 = metrics.f1_score(y1,pred1,average='micro')
f1
p1 = metrics.precision_score(y1,pred1,average='micro')
p1
r1 = metrics.recall_score(y1,pred1,average='micro')
r1
pred2 = np.zeros((17589))
p2 = metrics.precision_score(y1,pred2,average='micro')
p2
f2 = metrics.f1_score(y1,pred2,average='micro')
f2 | _____no_output_____ | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
Slice 128 patient 0001 | from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
np.unique(d[2]),
d[2])
class_weights
m1.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
m1_info = m1.fit([X1,X2],y,epochs=20,batch_size=256,class_weight= class_weights) | Epoch 1/20
14541/14541 [==============================] - 127s 9ms/step - loss: 1.3402 - acc: 0.8345
Epoch 2/20
14541/14541 [==============================] - 123s 8ms/step - loss: 1.1816 - acc: 0.9560
Epoch 3/20
14541/14541 [==============================] - 123s 8ms/step - loss: 1.0906 - acc: 0.9647
Epoch 4/20
14541/14541 [==============================] - 123s 8ms/step - loss: 1.0021 - acc: 0.9735
Epoch 5/20
14541/14541 [==============================] - 123s 8ms/step - loss: 0.9231 - acc: 0.9801
Epoch 6/20
5632/14541 [==========>...................] - ETA: 1:15 - loss: 0.8693 - acc: 0.9826 | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
Plot of input cascade | import matplotlib.pyplot as plt
plt.plot(m1_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
m1.save('trial_0001_input_cascade_acc.h5')
plt.plot(m_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show() | _____no_output_____ | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
Training on slice 128, evaluating on 136 | m.evaluate([X1,X2],y,batch_size = 1024)
m.save('trial_0001_MFCcas_dim2_128_acc.h5')
pred = m.predict([X1,X2],batch_size = 1024)
print(((pred != 0.) & (pred != 1.)).any())
pred = np.around(pred)
type(y)
pred1 = np.dot(pred.reshape(17589,5),np.array([0,1,2,3,4]))
pred1.shape
y1 = np.dot(y.reshape(17589,5),np.array([0,1,2,3,4]))
from sklearn import metrics
f1 = metrics.f1_score(y1,pred1,average='micro')
f1
pred2 = np.zeros((17589,1))
f1 = metrics.f1_score(y1,pred2,average='micro')
f1
f1 = metrics.f1_score(y1,pred1,average='weighted')
f1
f1 = metrics.f1_score(y1,pred1,average='macro')
f1
plt.plot(m_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
m_info = m.fit([X1,X2],y,epochs=20,batch_size=256,class_weight= 10*class_weights)
plt.plot(m_info.history['acc'])
#plt.plot(m_info.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
import matplotlib.pyplot as plt
m.save('trial_0001_MFCcas_dim2_128_acc.h5')
import keras
def two_pathcnn(input_shape):
X_input = Input(input_shape)
X = Conv2D(64,(7,7),strides=(1,1),padding='valid')(X_input)
X = BatchNormalization()(X)
X1 = Conv2D(64,(7,7),strides=(1,1),padding='valid')(X_input)
X1 = BatchNormalization()(X1)
X = layers.Maximum()([X,X1])
X = Conv2D(64,(4,4),strides=(1,1),padding='valid',activation='relu')(X)
X2 = Conv2D(160,(13,13),strides=(1,1),padding='valid')(X_input)
X2 = BatchNormalization()(X2)
X21 = Conv2D(160,(13,13),strides=(1,1),padding='valid')(X_input)
X21 = BatchNormalization()(X21)
X2 = layers.Maximum()([X2,X21])
X3 = Conv2D(64,(3,3),strides=(1,1),padding='valid')(X)
X3 = BatchNormalization()(X3)
X31 = Conv2D(64,(3,3),strides=(1,1),padding='valid')(X)
X31 = BatchNormalization()(X31)
X = layers.Maximum()([X3,X31])
X = Conv2D(64,(2,2),strides=(1,1),padding='valid',activation='relu')(X)
X = Concatenate()([X2,X])
X = Conv2D(5,(21,21),strides=(1,1),padding='valid')(X)
X = Activation('softmax')(X)
model = Model(inputs = X_input, outputs = X)
return model
import os
m0 = two_pathcnn((33,33,4))
m0.summary()
os.chdir('drive/brat')
!ls | data.ipynb model.ipynb
data_scan_0001.pickle training.ipynb
data_trial_81.h5 trial_0001_81_accuracy.h5
data_trial_dim2_128.h5 trial_0001_81_f1.h5
data_trial.h5 trial_0001_accuracy.h5
data_trial_X.pickle trial_0001_f1.h5
data_trial_Y.pickle trial_0001_input_cascade_acc.h5
data_Y_0001.pickle trial_0001_input_cascasde_acc.h5
HG trial_0001_MFCcas_dim2_128_acc.h5
LG
| MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
For training over the entire image, create a batch of patches for one image and a batch of labels in Y | import h5py
import numpy as np
hf = h5py.File('data_trial_dim2_128.h5', 'r')
X = hf.get('dataset_1')
Y = hf.get('dataset_2')
y = np.zeros((26169,1,1,5))
for i in range(y.shape[0]):
y[i,:,:,Y[i]] = 1
X = np.asarray(X)
X.shape
keras.__version__
import keras.backend as K
def f1_score(y_true, y_pred):
    # Count positive samples.
    c1 = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))  # true positives
    c2 = K.sum(K.round(K.clip(y_true, 0, 1)))            # actual positives
    c3 = K.sum(K.round(K.clip(y_pred, 0, 1)))            # predicted positives
    # If there are no predicted samples, fix the F1 score at 0.
    if c3 == 0:
        return 0
    # Precision: how many of the predicted positives are correct?
    precision = c1 / c3
    # Recall: how many of the actual positives were found?
    recall = c1 / c2
    # Calculate f1_score (symmetric in precision and recall).
    f1_score = 2 * (precision * recall) / (precision + recall)
    return f1_score
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight('balanced',
np.unique(Y),
Y)
m0.compile(optimizer='adam',loss='categorical_hinge',metrics=[f1_score])
m0_info = m0.fit(X,y,epochs=20,batch_size=1024,class_weight = class_weights)
m0.save('trial_0001_dim2_128_f1.h5')
!ls
m0.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
m0_info = m0.fit(X,y,epochs=20,batch_size=4096,class_weight = class_weights)
m0.save('trial_0001_dim2_128_accuracy.h5')
!ls
mod = keras.models.load_model('trial_0001_81_accuracy.h5')
mod.evaluate(X,y,batch_size = 1024)
pred = m0.predict(X,batch_size = 1024)
pred.shape
pred = np.floor(pred)
y.reshape(26169,5)
pred.astype(int)
pred = pred.reshape(26169,5)
y_pred = np.floor(np.dot(pred,np.array([0,1,2,3,4])))
y_pred.reshape(26169,1)
y_pred.shape
print(((y_pred != 0.) & (y_pred != 1.)).any())
from matplotlib import pyplot as plt
plt.imshow(np.uint8(y_pred*32))
plt.show()
from sklearn import metrics
f1 = metrics.f1_score(y,pred)
!pip3 install SimpleITK
import SimpleITK as sitk
import numpy as np
path = 'LG/0001'
p = os.listdir(path)
p.sort(key=str.lower)
arr = []
for i in range(len(p)):
if(i != 4):
p1 = os.listdir(path+'/'+p[i])
p1.sort()
img = sitk.ReadImage(path+'/'+p[i]+'/'+p1[-1])
arr.append(sitk.GetArrayFromImage(img))
else:
p1 = os.listdir(path+'/'+p[i])
img = sitk.ReadImage(path+'/'+p[i]+'/'+p1[0])
Y_labels = sitk.GetArrayFromImage(img)
data = np.zeros((Y_labels.shape[1],Y_labels.shape[0],Y_labels.shape[2],4))
for i in range(196):
data[i,:,:,0] = arr[0][:,i,:]
data[i,:,:,1] = arr[1][:,i,:]
data[i,:,:,2] = arr[2][:,i,:]
data[i,:,:,3] = arr[3][:,i,:]
def model_gen(input_dim,x,y,slice_no):
X1 = []
X2 = []
Y = []
for i in range(int((input_dim)/2),175-int((input_dim)/2)):
for j in range(int((input_dim)/2),195-int((input_dim)/2)):
if(x[i-16:i+17,j-16:j+17,:].any != 0):
X2.append(x[i-16:i+17,j-16:j+17,:])
X1.append(x[i-int((input_dim)/2):i+int((input_dim)/2)+1,j-int((input_dim)/2):j+int((input_dim)/2)+1,:])
Y.append(y[i,slice_no,j])
X1 = np.asarray(X1)
X2 = np.asarray(X2)
Y = np.asarray(Y)
d = [X1,X2,Y]
return d
def data_gen(data,y,slice_no,model_no):
d = []
x = data[slice_no]
if(x.any() != 0 and y.any() != 0):
if(model_no == 0):
X1 = []
for i in range(16,159):
for j in range(16,199):
if(x[i-16:i+17,j-16:j+17,:].all != 0):
X1.append(x[i-16:i+17,j-16:j+17,:])
Y1 = []
for i in range(16,159):
for j in range(16,199):
if(x[i-16:i+17,j-16:j+17,:].all != 0):
Y1.append(y[i,slice_no,j])
X1 = np.asarray(X1)
Y1 = np.asarray(Y1)
d = [X1,Y1]
elif(model_no == 1):
d = model_gen(65,x,y,slice_no)
elif(model_no == 2):
d = model_gen(56,x,y,slice_no)
elif(model_no == 3):
d = model_gen(53,x,y,slice_no)
return d
from sklearn.utils import class_weight
m0.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])
info = []
for i in range(90,data.shape[0],2):
d = data_gen(data,Y_labels,i,0)
if(len(d) != 0):
y = np.zeros((d[-1].shape[0],1,1,5))
for j in range(y.shape[0]):
y[j,:,:,d[-1][j]] = 1
X1 = d[0]
class_weights = class_weight.compute_class_weight('balanced',
np.unique(d[-1]),
d[-1])
print('slice no:'+str(i))
info.append(m0.fit(X1,y,epochs=2,batch_size=32,class_weight= class_weights))
m0.save('trial_0001_2path_acc.h5')
| _____no_output_____ | MIT | model.ipynb | abhi134/Brain_Tumor_Segmentation |
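As a quick sanity check on the patch counts used in this notebook (a sketch; the loop bounds are taken directly from the model_no == 0 branch of data_gen above):

# Valid 33x33 patch centers per slice in data_gen: rows 16..158 and columns 16..198.
num_rows = len(range(16, 159))   # 143
num_cols = len(range(16, 199))   # 183
print(num_rows * num_cols)       # 26169 patches per slice, matching the 26169-row arrays loaded earlier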
Load 10 years of accident data, from 2007 to 2016 | #load accidents data from 2007 to 2016
dbf07= DBF('accident/accident2007.dbf')
dbf08= DBF('accident/accident2008.dbf')
dbf09= DBF('accident/accident2009.dbf')
dbf10= DBF('accident/accident2010.dbf')
dbf11 = DBF('accident/accident2011.dbf')
dbf12 = DBF('accident/accident2012.dbf')
dbf13 = DBF('accident/accident2013.dbf')
dbf14 = DBF('accident/accident2014.dbf')
dbf15 = DBF('accident/accident2015.dbf')
dbf16 = DBF('accident/accident2016.dbf')
accidents07 = DataFrame(iter(dbf07))
accidents08 = DataFrame(iter(dbf08))
accidents09 = DataFrame(iter(dbf09))
accidents10 = DataFrame(iter(dbf10))
accidents11 = DataFrame(iter(dbf11))
accidents12 = DataFrame(iter(dbf12))
accidents13 = DataFrame(iter(dbf13))
accidents14 = DataFrame(iter(dbf14))
accidents15 = DataFrame(iter(dbf15))
accidents16 = DataFrame(iter(dbf16)) | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
First, we want to combine accidents07 through accidents16 into one dataframe. Since not all of the accident data downloaded from the U.S. Department of Transportation have the same features, using the `join='inner'` option of the `pd.concat` function gives us the intersection of features. | # rename columns in accidents07 so that the column names match the other frames
accidents07.rename(columns={'latitude': 'LATITUDE', 'longitud': 'LONGITUD'}, inplace=True)
# take a look inside how the accident data file looks like
#combine all accidents file
allaccidents = pd.concat([accidents07,accidents08,accidents09,accidents10,accidents11,accidents12,accidents13,accidents14,accidents15,accidents16], axis=0,join='inner')
pd.set_option('display.max_columns', 100)
allaccidents.head() | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
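To illustrate why `join='inner'` keeps only the columns shared by every year, here is a minimal toy sketch (the tiny frames below are made up and are not the real FARS tables):

import pandas as pd

# Two toy frames with partially overlapping columns.
a = pd.DataFrame({'YEAR': [2007], 'FATALS': [1], 'latitude': [40.7]})
b = pd.DataFrame({'YEAR': [2016], 'FATALS': [2], 'LATITUDE': [34.0]})

# join='inner' keeps only the intersection of the columns (YEAR and FATALS here).
print(pd.concat([a, b], axis=0, join='inner').columns.tolist())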
The allaccidents table records 320,874 accidents from 2007-2016 and has 42 features. Here are the meanings of some of the features according to the `FARS Analytical User's Manual`.

Explanation of variables:
*VE_TOTAL*: Number of Vehicles in crash
*VE_FORMS*: Number of Motor Vehicles in Transport (MVIT)
*PED*: Number of Persons Not in Motor Vehicles
*NHS*: National Highway System
*ROUTE*: Route Signing
*SP_JUR*: Special Jurisdiction
*HARM_EV*: First Harmful Event (the injury or damage producing event)
*TWAY_ID, TWAY_ID2*: Trafficway Identifier
*MILEPT*: Milepoint
*MAN_COLL*: Manner of Collision
*RELJCT1, RELJCT2*: Relation to Junction - Within Interchange Area, Specific Location
*TYP_INT*: Type of Intersection
*REL_ROAD*: Relation to Trafficway
*LGT_COND*: Light Condition
*NOT_HOUR, MIN*: Hour, Minute of Notification
*ARR_HOUR, MIN*: Hour, Minute of arrival at scene
*HOSP_HR, MIN*: Hour, Minute of arrival at hospital
*CF1, CF2, CF3*: Related Factors - Crash Level, factors related to the crash
*FATALS*: Fatalities
*DRUNK_DR*: Number of Drinking Drivers
*RAIL*: Rail Grade Crossing Identifier

For more detailed information, please refer to the `FARS Analytical User's Manual`.

Select variables and rename variables
Observed from the table above, some of the variable names are not very readable. Therefore, to make the variables easier to understand, we renamed some of them according to the `FARS Analytical User's Manual` downloaded from the `U.S. Department of Transportation` website. To make all column values informative, we selected the important columns from allaccidents and replaced numerical codes with meaningful character descriptions according to the `FARS Analytical User's Manual`. | import warnings
warnings.filterwarnings('ignore')
accidents = allaccidents[['YEAR','ST_CASE','STATE','VE_TOTAL','PERSONS','FATALS','MONTH','DAY_WEEK','HOUR','NHS','LATITUDE','LONGITUD','MAN_COLL','LGT_COND','WEATHER','ARR_HOUR','ARR_MIN','CF1','DRUNK_DR']]
accidents.rename(columns={'ST_CASE':'CASE_NUM','VE_TOTAL':'NUM_VEHICLE','NHS': 'HIGHWAY', 'MAN_COLL': 'COLLISION_TYPE','LGT_COND':'LIGHT_CONDITION','CF1':'CRASH_FACTOR','DRUNK_DR':'DRUNK_DRIVE'}, inplace=True)
accidents['MONTH'] = accidents['MONTH'].map({1.0:'January', 2.0:'February', 3.0: 'March', 4.0:'April', 5.0:'May', 6.0:'June', 7.0:'July', 8.0:'August',9.0: 'September', 10.0:'October', 11.0:'November', 12.0:'December'})
accidents['DAY_WEEK']= accidents['DAY_WEEK'].map({1.0:'Sunday',2.0:'Monday', 3.0:'Tuesday', 4.0: 'Wednesday', 5.0:'Thursday', 6.0:'Friday', 7.0:'Saturday'})
accidents['HIGHWAY'] = accidents['HIGHWAY'].map({1.0:'On',0.0:'Off',9.0:'Unknow'})
accidents['COLLISION_TYPE'] = accidents['COLLISION_TYPE'].map({0.0:'Not Collision',1.0:'Rear-End',2.0:'Head-On',3.0:'Rear-to-Rear',4.0:'Angle',5.0:'Sideswipe, Same Direction',6.0:'Sideswipe, Opposite Direction',7.0:'Sideswipe, Unknown Direction',9.0:'Unknown'})
accidents['LIGHT_CONDITION'] = accidents['LIGHT_CONDITION'].map({1.0:'Daylight',2.0:'Dark' ,3.0:'Dark',5.0:'Dusk',6.0:'Dark',4.0:'Dawn', 7.0:'Other',8.0 :'Not Report', 9.0:'Not Report'})
# accidents['WEATHER'] = accidents['WEATHER'].map({0.0:'Normal',1.0:'Clear',2.0:'Rain',3.0
accidents.head() | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
Combine "YEAR" and "CASE_NUM" into a composite key to reindex the accidents dataframe. | accidents['STATE']=accidents['STATE'].astype(int)
accidents['CASE_NUM']=accidents['CASE_NUM'].astype(int)
accidents['YEAR']=accidents['YEAR'].astype(int)
accidents.index = list(accidents['YEAR'].astype(str) + accidents['CASE_NUM'].astype(str))
accidents.head()
accidents.shape | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
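The composite index built above just concatenates the year and the case number as strings; the same key is rebuilt on the vehicle table below so the two tables can be merged on their indexes. A tiny sketch with a made-up case number:

year, case_num = 2007, 10001          # hypothetical values for illustration
print(str(year) + str(case_num))      # '200710001' - one unique key per crash per year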
Load the vehicle data file, which contains per-vehicle fatality counts

We also want to study the mortality rate of fatal accidents. The data element "Fatalities in Vehicle" in the Vehicle data file from the `U.S. Department of Transportation` website provides the number of deaths in a vehicle. | vdbf07= DBF('vehicle_deaths/vehicle2007.dbf')
vdbf08= DBF('vehicle_deaths/vehicle2008.dbf')
vdbf09= DBF('vehicle_deaths/vehicle2009.dbf')
vdbf10= DBF('vehicle_deaths/vehicle2010.dbf')
vdbf11= DBF('vehicle_deaths/vehicle2011.dbf')
vdbf12= DBF('vehicle_deaths/vehicle2012.dbf')
vdbf13= DBF('vehicle_deaths/vehicle2013.dbf')
vdbf14= DBF('vehicle_deaths/vehicle2014.dbf')
# vdbf15= DBF('vehicle_deaths/vehicle2015.csv')
vdbf16= DBF('vehicle_deaths/vehicle2016.dbf')
vehicle07 = DataFrame(iter(vdbf07))
vehicle08 = DataFrame(iter(vdbf08))
vehicle09 = DataFrame(iter(vdbf09))
vehicle10 = DataFrame(iter(vdbf10))
vehicle11 = DataFrame(iter(vdbf11))
vehicle12 = DataFrame(iter(vdbf12))
vehicle13 = DataFrame(iter(vdbf13))
vehicle14 = DataFrame(iter(vdbf14))
# vehicle15 = pd.read_csv('vehicle_deaths/vehicle2015.csv')
vehicle16 = DataFrame(iter(vdbf16))
vehicle07['YEAR']=2007
vehicle08['YEAR']=2008
vehicle09['YEAR']=2009
vehicle10['YEAR']=2010
vehicle11['YEAR']=2011
vehicle12['YEAR']=2012
vehicle13['YEAR']=2013
vehicle14['YEAR']=2014
# vehicle15['YEAR']='2015.0'
vehicle16['YEAR']=2016
allvehicles=pd.concat([vehicle07,vehicle08,vehicle09,vehicle10,vehicle11,vehicle12,vehicle13,vehicle14,vehicle16], axis=0,join='outer')
vehicles = allvehicles[['STATE','YEAR','ST_CASE','HIT_RUN','TRAV_SP','ROLLOVER','FIRE_EXP','SPEEDREL','DEATHS']]
vehicles.rename(columns={'ST_CASE':'CASE_NUM','TRAV_SP':'SPEED','FIRE_EXP': 'FIRE','SPEEDREL':'SPEEDING'}, inplace=True)
vehicles['STATE']=vehicles['STATE'].astype(int)
vehicles['CASE_NUM']=vehicles['CASE_NUM'].astype(int)
vehicles['YEAR']=vehicles['YEAR'].astype(int)
vehicles.index = list(vehicles['YEAR'].astype(str) + vehicles['CASE_NUM'].astype(str))
vehicles.head()
all = pd.merge(vehicles, accidents, left_index=True, right_index=True, how='inner',on=('STATE', 'YEAR','CASE_NUM'))
all.index=(all.index).astype(int)
all.sort_index()
all.head() | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
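The notebook says it wants to study the mortality rate but does not compute one at this point. As a hedged sketch only (this definition is an assumption, not taken from the source), one simple proxy would be total vehicle deaths divided by total persons involved, using columns already present in `all`:

# Assumed definition: overall mortality proxy = total deaths / total persons involved.
mortality_proxy = all['DEATHS'].sum() / all['PERSONS'].sum()
print(mortality_proxy)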
plot | #the total accidents number each year, analysis the difference between every year
year_acci=all[['YEAR','CASE_NUM']].groupby('YEAR').count()
month_acci=all[['MONTH','CASE_NUM']].groupby('MONTH').count()
day_acci=all[['DAY_WEEK','CASE_NUM']].groupby('DAY_WEEK').count()
hour_acci=all[['HOUR','CASE_NUM']].groupby('HOUR').count()
hour_acci=hour_acci.drop(hour_acci.index[-1])
hour_acci.iloc[0]=hour_acci.iloc[0]+hour_acci.iloc[-1]
hour_acci=hour_acci.drop(hour_acci.index[-1])
day_acci.index = pd.CategoricalIndex(day_acci.index,
categories=['Monday', 'Tuesday', 'Wednesday', 'Thursday','Friday','Saturday', 'Sunday'],
sorted=True)
day_acci = day_acci.sort_index()
month_acci.index=pd.CategoricalIndex(month_acci.index,
categories=['January', 'February', 'March', 'April','May','June', 'July','August','September','October','November','December'],
sorted=True)
month_acci=month_acci.sort_index()
import matplotlib.pyplot as plt
f1,axarr = plt.subplots(2,2)
f1.set_figwidth(15)
f1.set_figheight(9)
axarr[0,0].set_ylabel("count")
axarr[0,0].set_title("the total accidents number each year")
axarr[0,0].bar(year_acci.index,year_acci['CASE_NUM'])
objects1=np.array(month_acci.index)
x1=np.arange(len(objects1))
axarr[0,1].set_ylabel("count")
axarr[0,1].set_title("the total accidents number every month")
axarr[0,1].bar(x1,month_acci['CASE_NUM'])
axarr[0,1].set_xticks(x1)
axarr[0,1].set_xticklabels(objects1)
axarr[1,0].set_ylabel("count")
axarr[1,0].set_title("the total accidents number every hour")
axarr[1,0].bar(hour_acci.index,hour_acci['CASE_NUM'])
axarr[1,0].set_xticks(np.arange(0,24))
objects2=np.array(day_acci.index)
x2=np.arange(len(objects2))
axarr[1,1].set_ylabel("count")
axarr[1,1].set_title("the total accidents number every week")
axarr[1,1].bar(x2,day_acci['CASE_NUM'])
axarr[1,1].set_xticks(x2)
axarr[1,1].set_xticklabels(objects2)
f1
#f1.savefig("fig/time_relate_count.png")
all.to_hdf('results/df1.h5', 'all')
#with shelve.open('results/vars2') as db:
#db['speech_words'] = speech_words
#db['speeches_cleaned'] = speeches_cleaned | _____no_output_____ | MIT | .ipynb_checkpoints/Project 3-checkpoint.ipynb | junemore/traffic-accidents-analysis |
Part 9: Hither to Train, Thither to Test

OK, now we know a bit about perceptrons. We'll return to that subject again. But now let's do a couple of things with our 48 colors from lesson 7:
* We're going to wiggle some more - perturb the color data - in order to generate even more data.
* But now we're going to randomly split the data into two parts, 80% for training and 20% for testing.

Why split for training and testing?

Repeating the Same Things Too Much Makes Jack a Dull Network

It's possible to *overtrain* your network, to provide it with so much similar data and so many epoch repetitions that it only learns the data you give it, so that if you give it something new it can't deal with it very well and its guesses come out wrong. A network that can make good predictions is a *generalized* network. So if you have a lot of data - enough not to worry about *undertraining* by not providing enough examples - you can keep some of it aside as a test for after all your epochs are done, to see whether, when you give it data it was not trained on, it can still produce similar loss and accuracy. This testing is called *scoring* the network against test data.

But why weren't we splitting data before?

In the beginning we had no colors, then 3, then 11, then 24, then 48. Splitting with so little data does not do much good, as you'll keep important information about some colors out of training, and the network won't know what they are since it never saw them as an example. When we started perturbing - wiggling - the original colors to multiply how much data we have, splitting started to become possible.

Slightly Different Network

There are a bunch of differences in the network code below, based on what we learned in lesson 7:
* We use only 4 perceptrons per color; it used to be 8.
* We use a batch size of 8 to avoid waiting too long for training. As we increase the data we can usually increase the batch size without making things much worse.
* We split the data into training and test, then train on the training data.
* Then after training we score the network against the test data, which the network hasn't seen yet. | from keras.layers import Activation, Dense, Dropout
from keras.models import Sequential
import keras.optimizers, keras.utils, numpy
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
def train(rgbValues, colorNames, epochs = 3, perceptronsPerColorName = 4, batchSize = 8):
"""
Trains a neural network to understand how to map color names to RGB triples.
The provided lists of RGB triples must be floating point triples with each
value in the range [0.0, 1.0], and the number of color names must be the same length.
Different names are allowed to map to the same RGB triple.
Returns a trained model that can be used for recognize().
"""
# Convert the Python map RGB values into a numpy array needed for training.
rgbNumpyArray = numpy.array(rgbValues, numpy.float64)
# Convert the color labels into a one-hot feature array.
# Text labels for each array position are in the classes_ list on the binarizer.
labelBinarizer = LabelBinarizer()
oneHotLabels = labelBinarizer.fit_transform(colorNames)
numColors = len(labelBinarizer.classes_)
colorLabels = labelBinarizer.classes_
# Partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing.
(trainingColors, testColors, trainingOneHotLabels, testOneHotLabels) = train_test_split(
rgbNumpyArray, oneHotLabels, test_size=0.2)
# Hyperparameters to define the network shape.
numFullyConnectedPerceptrons = numColors * perceptronsPerColorName
model = Sequential([
# Layer 1: Fully connected layer with ReLU activation.
Dense(numFullyConnectedPerceptrons, activation='relu', kernel_initializer='TruncatedNormal', input_shape=(3,)),
# Outputs: SoftMax activation to get probabilities by color.
Dense(numColors, activation='softmax')
])
print(model.summary())
# Compile for categorization.
model.compile(
optimizer = keras.optimizers.SGD(lr = 0.01, momentum = 0.9, decay = 1e-6, nesterov = True),
loss = 'categorical_crossentropy',
metrics = [ 'accuracy' ])
history = model.fit(trainingColors, trainingOneHotLabels, epochs=epochs, batch_size=batchSize)
print("")
print("Scoring result against test data after training with training data:")
score = model.evaluate(testColors, testOneHotLabels, batch_size=batchSize)
print("")
print("Score: loss=%1.4f, accuracy=%1.4f" % (score[0], score[1]))
return (model, colorLabels) | Using TensorFlow backend.
| MIT | Part09_Hither_to_Train_Thither_to_Test.ipynb | erikma/ColorMatching |
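To make the 80/20 split described above concrete on its own, here is a minimal sketch using the same train_test_split call as in train(); the tiny random array and labels are made up purely for illustration.

import numpy
from sklearn.model_selection import train_test_split

toy_rgb = numpy.random.rand(10, 3)        # 10 fake RGB rows
toy_labels = numpy.arange(10)             # 10 fake labels
(train_x, test_x, train_y, test_y) = train_test_split(toy_rgb, toy_labels, test_size=0.2)
print(train_x.shape, test_x.shape)        # (8, 3) (2, 3): 80% train, 20% test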
Here's our createMoreTrainingData() function, mostly the same but we've doubled the number of perturbValues by adding points in between the previous ones. | def createMoreTrainingData(colorNameToRGBMap):
# The incoming color map is not typically going to be oversubscribed with e.g.
# extra 'red' samples pointing to slightly different colors. We generate a
# training dataset by perturbing each color by a small amount positive and
# negative. We do this for each color individually, by pairs, and for all three
# at once, for each positive and negative value, resulting in dataset that is
# many times as large.
perturbValues = [ 0.0, 0.005, 0.01, 0.015, 0.02, 0.025, 0.03 ]
rgbValues = []
labels = []
for colorName, rgb in colorNameToRGBMap.items():
reds = []
greens = []
blues = []
for perturb in perturbValues:
if rgb[0] + perturb <= 1.0:
reds.append(rgb[0] + perturb)
if perturb != 0.0 and rgb[0] - perturb >= 0.0:
reds.append(rgb[0] - perturb)
if rgb[1] + perturb <= 1.0:
greens.append(rgb[1] + perturb)
if perturb != 0.0 and rgb[1] - perturb >= 0.0:
greens.append(rgb[1] - perturb)
if rgb[2] + perturb <= 1.0:
blues.append(rgb[2] + perturb)
if perturb != 0.0 and rgb[2] - perturb >= 0.0:
blues.append(rgb[2] - perturb)
for red in reds:
for green in greens:
for blue in blues:
rgbValues.append((red, green, blue))
labels.append(colorName)
return (rgbValues, labels) | _____no_output_____ | MIT | Part09_Hither_to_Train_Thither_to_Test.ipynb | erikma/ColorMatching |
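A quick back-of-the-envelope count of what this function can generate: with 7 perturbation values, each channel gets at most 1 + 2*6 = 13 candidate values, so each color yields at most 13**3 perturbed samples. The actual totals printed by the training cell below (74,224 + 18,557 = 92,781) come out lower than this bound because perturbations that fall outside [0.0, 1.0] are dropped.

per_channel = 1 + 2 * 6            # 13 candidate values per channel at most
per_color = per_channel ** 3       # 2197 perturbed samples per color at most
print(per_color, 48 * per_color)   # upper bound of 105456 before boundary clipping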
And our previous 48 crayon colors, and let's try training: | def rgbToFloat(r, g, b): # r, g, b in 0-255 range
return (float(r) / 255.0, float(g) / 255.0, float(b) / 255.0)
# http://www.jennyscrayoncollection.com/2017/10/complete-list-of-current-crayola-crayon.html
colorMap = {
# 8-crayon box colors
'red': rgbToFloat(238, 32, 77),
'yellow': rgbToFloat(252, 232, 131),
'blue': rgbToFloat(31, 117, 254),
'brown': rgbToFloat(180, 103, 77),
'orange': rgbToFloat(255, 117, 56),
'green': rgbToFloat(28, 172, 20),
'violet': rgbToFloat(146, 110, 174),
'black': rgbToFloat(35, 35, 35),
# Additional for 16-count box
'red-violet': rgbToFloat(192, 68, 143),
'red-orange': rgbToFloat(255, 117, 56),
'yellow-green': rgbToFloat(197, 227, 132),
'blue-violet': rgbToFloat(115, 102, 189),
'carnation-pink': rgbToFloat(255, 170, 204),
'yellow-orange': rgbToFloat(255, 182, 83),
'blue-green': rgbToFloat(25, 158, 189),
'white': rgbToFloat(237, 237, 237),
# Additional for 24-count box
'violet-red': rgbToFloat(247, 83 ,148),
'apricot': rgbToFloat(253, 217, 181),
'cerulean': rgbToFloat(29, 172, 214),
'indigo': rgbToFloat(93, 118, 203),
'scarlet': rgbToFloat(242, 40, 71),
'green-yellow': rgbToFloat(240, 232, 145),
'bluetiful': rgbToFloat(46, 80, 144),
'gray': rgbToFloat(149, 145, 140),
# Additional for 32-count box
'chestnut': rgbToFloat(188, 93, 88),
'peach': rgbToFloat(255, 207, 171),
'sky-blue': rgbToFloat(128, 215, 235),
'cadet-blue': rgbToFloat(176, 183, 198),
'melon': rgbToFloat(253, 188, 180),
'tan': rgbToFloat(250, 167, 108),
'wisteria': rgbToFloat(205, 164, 222),
'timberwolf': rgbToFloat(219, 215, 210),
# Additional for 48-count box
'lavender': rgbToFloat(252, 180, 213),
'burnt-sienna': rgbToFloat(234, 126, 93),
'olive-green': rgbToFloat(186, 184, 108),
'purple-mountains-majesty': rgbToFloat(157, 129, 186),
'salmon': rgbToFloat(255, 155, 170),
'macaroni-and-cheese': rgbToFloat(255, 189, 136),
'granny-smith-apple': rgbToFloat(168, 228, 160),
'sepia': rgbToFloat(165, 105, 79),
'mauvelous': rgbToFloat(239, 152, 170),
'goldenrod': rgbToFloat(255, 217, 117),
'sea-green': rgbToFloat(159, 226, 191),
'raw-sienna': rgbToFloat(214, 138, 89),
'mahogany': rgbToFloat(205, 74, 74),
'spring-green': rgbToFloat(236, 234, 190),
'cornflower': rgbToFloat(154, 206, 235),
'tumbleweed': rgbToFloat(222, 170, 136),
}
(rgbValues, colorNames) = createMoreTrainingData(colorMap)
(colorModel, colorLabels) = train(rgbValues, colorNames) | WARNING:tensorflow:From c:\users\erik\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 192) 768
_________________________________________________________________
dense_2 (Dense) (None, 48) 9264
=================================================================
Total params: 10,032
Trainable params: 10,032
Non-trainable params: 0
_________________________________________________________________
None
WARNING:tensorflow:From c:\users\erik\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/3
74224/74224 [==============================] - 14s 185us/step - loss: 1.0514 - acc: 0.7338
Epoch 2/3
74224/74224 [==============================] - 12s 166us/step - loss: 0.2417 - acc: 0.9323
Epoch 3/3
74224/74224 [==============================] - 13s 169us/step - loss: 0.1757 - acc: 0.9456
Scoring result against test data after training with training data:
18557/18557 [==============================] - 2s 86us/step
Score: loss=0.1417, accuracy=0.9565
| MIT | Part09_Hither_to_Train_Thither_to_Test.ipynb | erikma/ColorMatching |
Not bad: We quickly got our loss down to 0.17 in only 3 epochs, but the larger batch size kept it from taking a really long time.

But let's examine our new addition, the test data scoring result. From my machine: `Score: loss=0.1681, accuracy=0.9464`

Note that we trained with 74,000 data points, but we kept aside an additional 18,000 data points as test data the network was not allowed to train with. And when we ask the network to predict with the test data, the loss we get - 0.168 on my machine - is pretty close to the 0.172 loss I got on training.

This is good news! It means our network is well generalized: not overtrained, not too focused to deal with making predictions on new data.

Try it out to make sure it still seems like a good result: | from ipywidgets import interact
from IPython.core.display import display, HTML
def displayColor(r, g, b):
rInt = min(255, max(0, int(r * 255.0)))
gInt = min(255, max(0, int(g * 255.0)))
bInt = min(255, max(0, int(b * 255.0)))
hexColor = "#%02X%02X%02X" % (rInt, gInt, bInt)
display(HTML('<div style="width: 50%; height: 50px; background: ' + hexColor + ';"></div>'))
numPredictionsToShow = 5
@interact(r = (0.0, 1.0, 0.01), g = (0.0, 1.0, 0.01), b = (0.0, 1.0, 0.01))
def getTopPredictionsFromModel(r, g, b):
testColor = numpy.array([ (r, g, b) ])
predictions = colorModel.predict(testColor, verbose=0) # Predictions shape (1, numColors)
predictions *= 100.0
predColorTuples = []
for i in range(0, len(colorLabels)):
predColorTuples.append((predictions[0][i], colorLabels[i]))
predAndNames = numpy.array(predColorTuples, dtype=[('pred', float), ('colorName', 'U50')])
sorted = numpy.sort(predAndNames, order=['pred', 'colorName'])
sorted = sorted[::-1] # reverse rows to get highest on top
for i in range(0, numPredictionsToShow):
print("%2.1f" % sorted[i][0] + "%", sorted[i][1])
displayColor(r, g, b) | _____no_output_____ | MIT | Part09_Hither_to_Train_Thither_to_Test.ipynb | erikma/ColorMatching |
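A quick check that the row counts reported during training match the 20% holdout:

train_rows, test_rows = 74224, 18557
print(test_rows / (train_rows + test_rows))  # ~0.2, the test_size used in train()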
In my opinion the extra perturbation data made quite a bit of difference. It guesses over 70% for gray at (0.5, 0.5, 0.5), better than before.

Here's the hyperparameter slider version so you can try out different epochs, batch sizes, and perceptrons: | @interact(epochs = (1, 10), perceptronsPerColorName = (1, 12), batchSize = (1, 50))
def trainModel(epochs=4, perceptronsPerColorName=3, batchSize=16):
global colorModel
global colorLabels
(colorModel, colorLabels) = train(rgbValues, colorNames, epochs=epochs, perceptronsPerColorName=perceptronsPerColorName, batchSize=batchSize)
interact(getTopPredictionsFromModel, r = (0.0, 1.0, 0.01), g = (0.0, 1.0, 0.01), b = (0.0, 1.0, 0.01)) | _____no_output_____ | MIT | Part09_Hither_to_Train_Thither_to_Test.ipynb | erikma/ColorMatching |