# Notes on Linux
This is a simple set of notes. There is no clear index to arrange the content; I just record the things I am likely to forget.
**Author: Yue-Wen FANG**
**Contact: [email protected]**
**Revision history: created on 16 December 2017, in Kyoto**
## 1. Configure the SSH connection
Professionally speaking, a better title would be: Configure Custom Connection Options for your SSH Client.
### Generating a new SSH key
On github help webpage (see [Ref1](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/)), there is a simple introduction on generating the ssh key pairs.
Open your terminal (whether it is Cygwin, MinGW, Linux, or any other virtual Linux environment):
> ssh-keygen -t rsa -b 4096 -C "[email protected]"
> #This creates a new ssh key, using the provided email as a label.
Before reading my note, you have probably already seen a keygen command like
> ssh-keygen -t rsa -b 2048 -C "[email protected]"
The only difference is the number of bits we use, which determines the length of the generated keys. You can use either one, as you prefer.
When you're prompted to "Enter a file in which to save the key," press Enter. This accepts the default file location.
> Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]
> #you can also use a different path, such as /Users/you/.ssh/id_rsa.cluster
At the prompt, type a secure passphrase (usually I just press Enter).
### Adding your SSH key to the ssh-agent
Start the ssh-agent in the background.
> eval "$(ssh-agent -s)"
> #This command will print something like "Agent pid 59566"
Add your SSH private key to the ssh-agent. If you created your key with a different name, or if you are adding an existing key that has a different name, replace id_rsa in the command with the name of your private key file.
> ssh-add ~/.ssh/id_rsa
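If you saved the key under a different name, for example the id_rsa.cluster path mentioned above, add that file instead:
> ssh-add ~/.ssh/id_rsa.cluster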
## 2. Add your public key to GitHub
Copy the content of ~/.ssh/id_rsa.pub (the public key, not the private id_rsa file) to your GitHub account (Settings -> SSH and GPG keys -> New SSH key or Add SSH key). Then you can use git push without entering a password.
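For example (a quick sketch; the exact reply text may differ), you can print the public key and then test the connection after adding it:
> cat ~/.ssh/id_rsa.pub
> #copy the whole output (it starts with "ssh-rsa") into the New SSH key form
> ssh -T [email protected]
> #on success GitHub replies with something like "Hi username! You've successfully authenticated..."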
## 3. Specify a key pair?
Sometimes you have several different key pairs (that is, several id_rsa files such as id_rsa.home and id_rsa.work) and you want to use each of them only in specific situations. What should we do?
We can use the config file in ~/.ssh; if it does not exist, just create it.
You can follow these guides to set up your config file (see [Ref2](https://superuser.com/questions/232373/how-to-tell-git-which-private-key-to-use) or [Ref3](https://www.keybits.net/post/automatically-use-correct-ssh-key-for-remote-git-repo/)).
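As a minimal sketch (the host aliases and repository name below are placeholders, not taken from those guides), a ~/.ssh/config could look like:
> Host github-home
>   HostName github.com
>   User git
>   IdentityFile ~/.ssh/id_rsa.home
>   IdentitiesOnly yes
> Host github-work
>   HostName github.com
>   User git
>   IdentityFile ~/.ssh/id_rsa.work
>   IdentitiesOnly yes
Then `git clone git@github-home:yourname/yourrepo.git` (or `ssh -T github-home`) automatically picks the matching key.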
If you know Chinese, you can also read these blogs, in which the authors share their thoughts on the config file:
http://blog.csdn.net/u013647382/article/details/47832559
http://dhq.me/use-ssh-config-manage-ssh-session
https://www.hi-linux.com/posts/14346.html
```
#data https://archive.ics.uci.edu/ml/datasets/sms+spam+collection
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import nltk
%matplotlib inline
plt.style.use('ggplot')
nltk.download('stopwords')  # only the stopwords corpus is needed below; a bare nltk.download() opens the interactive downloader
messages = [line.rstrip() for line in open('SMSSpamCollection')]
print(len(messages))
messages[10]
for messages_number, message in enumerate(messages[:15]):
print(messages_number, message)
print('\n')
messages = pd.read_csv('SMSSpamCollection', sep='\t', names=['label', 'message'])
messages.head()
messages.describe()
messages.groupby('label').describe()
messages['length'] = messages['message'].apply(len)
messages.head()
plt.figure(figsize=(12,8))
messages['length'].plot(kind='hist', bins=150)
messages.length.describe()
messages[messages['length'] == 910]['message'].iloc[0]
messages.hist(bins=100, column='length', by='label', figsize=(12,8))
import string
mess = 'Mensagem de exemplo! Notem: Ela possui pontuação.'  # Portuguese: "Example message! Note: it has punctuation."
string.punctuation
sempont = [car for car in mess if car not in string.punctuation]
print(sempont)
sempont = ''.join(sempont)
print(sempont)
from nltk.corpus import stopwords
print(stopwords.words('english'))
print(stopwords.words('portuguese'))
tst = 'Sample message! Notice: it has punctuation.'
clean_mess = [word for word in tst.split() if word.lower() not in stopwords.words('english')]
clean_mess
def text_process(mess):
    # Remove punctuation
nopunc = [char for char in mess if char not in string.punctuation]
    # Join the text back together
nopunc = ''.join(nopunc)
    # Remove English stopwords
sms = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
return sms
messages['message'].head(5).apply(text_process)
from sklearn.feature_extraction.text import CountVectorizer
bow_transformer = CountVectorizer(analyzer=text_process).fit(messages['message'])
print(len(bow_transformer.vocabulary_))
message4 = messages['message'][3]
print(message4)
bow4 = bow_transformer.transform([message4])
print(bow4)
print(bow4.shape)
print(bow_transformer.get_feature_names()[9554])
messages_bow = bow_transformer.transform(messages['message'])
print(messages_bow.shape)
print(messages_bow.nnz)
sparsity = (100.0 * messages_bow.nnz / (messages_bow.shape[0] * messages_bow.shape[1]))
print('sparsity: {}'.format(sparsity))
from sklearn.feature_extraction.text import TfidfTransformer
tdidf_transform = TfidfTransformer()
tdidf_transform = tdidf_transform.fit(messages_bow)
tfdf4 = tdidf_transform.transform(bow4)
print(tfdf4)
print(tdidf_transform.idf_[bow_transformer.vocabulary_['university']])
from sklearn.naive_bayes import MultinomialNB
messages_tfidf = tdidf_transform.transform(messages_bow)
spam_detect_model = MultinomialNB().fit(messages_tfidf, messages['label'])
print('Predicted: ', spam_detect_model.predict(tfdf4)[0])
print('Expected: ', messages['label'][3])
from sklearn.model_selection import train_test_split
msg_train, msg_test, label_train, label_test = train_test_split(messages['message'], messages['label'], test_size=0.2)
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', MultinomialNB())
])
pipeline.fit(msg_train, label_train)
pred = pipeline.predict(msg_test)
from sklearn.metrics import classification_report
print(classification_report(label_test, pred))
```
submitted by Tarang Ranpara
## Part 1 - training CBOW and Skipgram models
```
# load library gensim (contains word2vec implementation)
import gensim
# ignore some warnings (probably caused by gensim version)
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import multiprocessing
cores = multiprocessing.cpu_count() # Count the number of cores
from tqdm import tqdm
# importing needed libs
import os
import re
import nltk
import pickle
import scipy
import numpy as np
from bs4 import BeautifulSoup as bs
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
import matplotlib.pyplot as plt
# downloading needed data
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
from google.colab import drive
drive.mount('/content/drive')
! mkdir data
! cp 'drive/MyDrive/IRLAB/A3/FIRE_Dataset_EN_2010.rar' './data/FIRE_Dataset_EN_2010.rar' > /dev/null
! unrar x data/FIRE_Dataset_EN_2010.rar data > /dev/null
! tar -xvf './data/FIRE_Dataset_EN_2010/English-Data.tgz' -C './data/FIRE_Dataset_EN_2010/' > /dev/null
class DataReader:
def read_and_process(self, data_dir):
# stopwords
stopwords = set(nltk.corpus.stopwords.words('english'))
        # Porter stemmer (a stemmer is used here, not a WordNet lemmatizer)
stemmer = nltk.stem.PorterStemmer()
file_names = []
text_tokens = []
i = 0
# iterating over 2004, 2005, 2006, 2007 etc dirs
for dir in tqdm(os.listdir(data_dir)):
dir_name = os.path.join(data_dir,dir)
# iterating over bengal, business, foreign etc dirs
for sub_dir in os.listdir(dir_name):
sub_dir_name = os.path.join(dir_name,sub_dir)
data_files = os.listdir(sub_dir_name)
for f in data_files:
f_name = os.path.join(sub_dir_name,f)
with open(f_name,'r') as fobj:
content = fobj.read()
soup = bs(content, "lxml")
# find text tag
temp_text_data = soup.find('text').text
# converting text to lower case
temp_text_data = temp_text_data.lower()
# removing numbers and special chars
temp_text_data = re.sub(r'[^\w\s]', '', temp_text_data)
temp_text_data = re.sub(r'\d+', '', temp_text_data)
# tokens
tokens = nltk.word_tokenize(temp_text_data)
# removing stopwords
tokens = [token for token in tokens if token not in stopwords]
                        # stemming with the Porter stemmer
tokens = list(map(stemmer.stem,tokens))
# removing empty files
if len(tokens) > 0:
text_tokens.append(tokens)
file_names.append(f)
if i%5000==0:
print(i, ' - ', f)
i += 1
# list of tokens, list of file names
return text_tokens, file_names
data_dir = "./data/FIRE_Dataset_EN_2010/TELEGRAPH_UTF8/"
dr = DataReader()
text_tokens, file_names = dr.read_and_process(data_dir)
for sentence in text_tokens[30:40]:
print(sentence)
```
### cbow model
```
# CBOW Model
w2v_model = gensim.models.Word2Vec(min_count=20,
window=5,
size=100,
sample=6e-5,
alpha=0.03,
min_alpha=0.0007,
negative=20,
workers=cores-1,
sg=0
)
w2v_model.build_vocab(text_tokens, progress_per=10000)
w2v_model.train(text_tokens, total_examples=w2v_model.corpus_count, epochs=5, report_delay=1)
w2v_model.init_sims(replace=True)
# word vectors are stored in model.wv
print("Size of the vocabulary: %d number of unique words have been considered" % len(w2v_model.wv.vocab))
example_word = 'woman'
print("\nWord vector of " + example_word)
print(w2v_model.wv[example_word].size)
print(w2v_model.wv[example_word])
print("\nWords with most similar vector representations to " + example_word)
print(w2v_model.wv.most_similar(example_word))
# similarity directly:
print("\nCosine similarity to other words:")
print(w2v_model.similarity('woman','man'))
print(w2v_model.similarity('woman','tree'))
# words most similar to "man"
w2v_model.wv.most_similar("man")
# words most similar to "politician"
w2v_model.wv.most_similar("politician")
w2v_model.wv.most_similar(positive=["king", "girl"], negative=["queen"], topn=10)
import numpy as np
labels = []
count = 0
max_count = 50
X = np.zeros(shape=(max_count, len(w2v_model['car'])))
for term in w2v_model.wv.vocab:
X[count] = w2v_model[term]
labels.append(term)
count+= 1
if count >= max_count: break
# It is recommended to use PCA first to reduce to ~50 dimensions
from sklearn.decomposition import PCA
pca = PCA(n_components=50)
X_50 = pca.fit_transform(X)
# Using TSNE to further reduce to 2 dimensions
from sklearn.manifold import TSNE
model_tsne = TSNE(n_components=2, random_state=0)
Y = model_tsne.fit_transform(X_50)
# Show the scatter plot
import matplotlib.pyplot as plt
plt.scatter(Y[:,0], Y[:,1], 20)
# Add labels
for label, x, y in zip(labels, Y[:, 0], Y[:, 1]):
plt.annotate(label, xy = (x,y), xytext = (0, 0), textcoords = 'offset points', size = 10)
plt.show()
```
### skipgram model
```
# SkipGram Model
w2v_model = gensim.models.Word2Vec(min_count=20,
window=5,
size=100,
sample=6e-5,
alpha=0.03,
min_alpha=0.0007,
negative=20,
workers=cores-1,
sg=1
)
w2v_model.build_vocab(text_tokens, progress_per=10000)
w2v_model.train(text_tokens, total_examples=w2v_model.corpus_count, epochs=5, report_delay=1)
w2v_model.init_sims(replace=True)
# word vectors are stored in model.wv
print("Size of the vocabulary: %d number of unique words have been considered" % len(w2v_model.wv.vocab))
example_word = 'woman'
print("\nWord vector of " + example_word)
print(w2v_model.wv[example_word].size)
print(w2v_model.wv[example_word])
print("\nWords with most similar vector representations to " + example_word)
print(w2v_model.wv.most_similar(example_word))
# similarity directly:
print("\nCosine similarity to other words:")
print(w2v_model.similarity('woman','man'))
print(w2v_model.similarity('woman','tree'))
w2v_model.wv.most_similar("man")
w2v_model.wv.most_similar("politician")
w2v_model.wv.most_similar(positive=["king", "girl"], negative=["queen"], topn=10)
# probably not enough data?
import numpy as np
labels = []
count = 0
max_count = 50
X = np.zeros(shape=(max_count, len(w2v_model['car'])))
for term in w2v_model.wv.vocab:
X[count] = w2v_model[term]
labels.append(term)
count+= 1
if count >= max_count: break
# It is recommended to use PCA first to reduce to ~50 dimensions
from sklearn.decomposition import PCA
pca = PCA(n_components=50)
X_50 = pca.fit_transform(X)
# Using TSNE to further reduce to 2 dimensions
from sklearn.manifold import TSNE
model_tsne = TSNE(n_components=2, random_state=0)
Y = model_tsne.fit_transform(X_50)
# Show the scatter plot
import matplotlib.pyplot as plt
plt.scatter(Y[:,0], Y[:,1], 20)
# Add labels
for label, x, y in zip(labels, Y[:, 0], Y[:, 1]):
plt.annotate(label, xy = (x,y), xytext = (0, 0), textcoords = 'offset points', size = 10)
plt.show()
```
## Part 2 - Training token classification models
```
import nltk
nltk.download('opinion_lexicon')
from nltk.corpus import opinion_lexicon
import gensim.downloader
```
### preparing data
```
positives = list(opinion_lexicon.positive())
negatives = list(opinion_lexicon.negative())
positives = [(tok, 1) for tok in positives ]
negatives = [(tok, 0) for tok in negatives ]
data = positives + negatives
final_dataset = []
categories = []
# The pretrained embeddings must be loaded before this loop
# (in the original notebook wv_model was only loaded further down, after this cell).
wv_model = gensim.downloader.load('glove-twitter-100')
for word, category in data:
    try:
        # the KeyedVectors object returned by gensim.downloader is indexed directly
        emb = wv_model[word]
        final_dataset.append(emb)
        categories.append(category)
    except KeyError:
        # skip words that are not in the embedding vocabulary
        continue
```
### SVC
```
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(final_dataset, categories, test_size=0.25, stratify=categories)
from sklearn.svm import SVC
svc = SVC()
svc.fit(x_train, y_train)
print(f'Score: {svc.score(x_test, y_test)}')
```
### Feed Forward Neural net for classification
```
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.metrics import Precision, Recall
vector_size = len(x_train[0])
batch_size = 64
epochs = 20
def NN(input_size, activation):
inputs = Input(shape=(input_size, ))
x = Dense(64, activation=activation)(inputs)
x = Dense(32, activation=activation)(x)
x = Dense(16, activation=activation)(x)
outputs = Dense(1, activation='sigmoid')(x)
return Model(inputs = inputs, outputs = outputs, name='token_classification')
model = NN(100, 'relu')
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[Precision(), Recall()])
H = model.fit(np.array(x_train), np.array(y_train), batch_size=batch_size, epochs=epochs, validation_split=0.1)
l, p, r = model.evaluate(np.array(x_test), np.array(y_test))
print(f'F1: {2 * p * r/ (p+r)}')
```
# TensorFlow Basics
Import the library:
```
import tensorflow as tf
print(tf.__version__)
```
### Simple Constants
Let's show how to create a simple constant with TensorFlow, which TF stores as a tensor object:
```
hello = tf.constant('Hello World')
type(hello)
x = tf.constant(100)
type(x)
```
### Running Sessions
Now you can create a TensorFlow Session, which is a class for running TensorFlow operations.
A `Session` object encapsulates the environment in which `Operation`
objects are executed, and `Tensor` objects are evaluated. For example:
```
sess = tf.Session()
sess.run(hello)
type(sess.run(hello))
sess.run(x)
type(sess.run(x))
```
## Operations
You can line up multiple TensorFlow operations to be run during a session:
```
x = tf.constant(2)
y = tf.constant(3)
with tf.Session() as sess:
print('Operations with Constants')
print('Addition',sess.run(x+y))
print('Subtraction',sess.run(x-y))
print('Multiplication',sess.run(x*y))
print('Division',sess.run(x/y))
```
### Placeholder
You may not always have the constants right away; sometimes you are waiting for a value that only appears after a cycle of operations. **tf.placeholder** is the tool for this. It inserts a placeholder for a tensor that will always be fed.
**Important**: This tensor will produce an error if evaluated. Its value must be fed using the `feed_dict` optional argument to `Session.run()`,
`Tensor.eval()`, or `Operation.run()`. For example, for a placeholder of a matrix of floating point numbers:
x = tf.placeholder(tf.float32, shape=(1024, 1024))
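To make this concrete, here is a minimal sketch (not part of the original notebook) of feeding that float placeholder; it can only be evaluated once a real array is supplied through `feed_dict`:
```
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(1024, 1024))
rand_matrix = np.random.rand(1024, 1024).astype(np.float32)
with tf.Session() as sess:
    # Running x (or any op built on it) without feed_dict raises an error,
    # because the placeholder has no value of its own.
    print(sess.run(tf.reduce_sum(x), feed_dict={x: rand_matrix}))
```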
Here is an example for integer placeholders:
```
x = tf.placeholder(tf.int32)
y = tf.placeholder(tf.int32)
x
type(x)
```
### Defining Operations
```
add = tf.add(x,y)
sub = tf.subtract(x,y)
mul = tf.multiply(x,y)
```
Running operations with variable input:
```
d = {x:20,y:30}
with tf.Session() as sess:
    print('Operations with Placeholders')
print('Addition',sess.run(add,feed_dict=d))
print('Subtraction',sess.run(sub,feed_dict=d))
print('Multiplication',sess.run(mul,feed_dict=d))
```
Now let's see an example of a more complex operation, using Matrix Multiplication. First we need to create the matrices:
```
import numpy as np
# Make sure to use floats here, int64 will cause an error.
a = np.array([[5.0,5.0]])
b = np.array([[2.0],[2.0]])
a
a.shape
b
b.shape
mat1 = tf.constant(a)
mat2 = tf.constant(b)
```
The matrix multiplication operation:
```
matrix_multi = tf.matmul(mat1,mat2)
```
Now run the session to perform the Operation:
```
with tf.Session() as sess:
result = sess.run(matrix_multi)
print(result)
```
That is all for now! Next we will expand these basic concepts to construct our own Multi-Layer Perceptron model!
```
import numpy as np
import pickle
from itertools import chain
from collections import OrderedDict
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
import matplotlib.pylab as plt
from copy import deepcopy
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
import matplotlib.pyplot as plt
import sys, os
try:
%matplotlib inline
sys.path.append(os.path.join(os.path.dirname("__file__"), '..', '..', '..'))
from mela.settings.filepath import variational_model_PATH, dataset_PATH
isplot = True
except:
sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
from mela.settings.filepath import variational_model_PATH, dataset_PATH
if dataset_PATH[:2] == "..":
dataset_PATH = dataset_PATH[3:]
isplot = False
from mela.util import plot_matrices, make_dir, get_struct_str, get_args, Early_Stopping, record_data, manifold_embedding
from mela.pytorch.net import Net
from mela.pytorch.util_pytorch import Loss_with_uncertainty
from mela.variational.util_variational import get_torch_tasks
from mela.variational.variational_meta_learning import Master_Model, Statistics_Net, Generative_Net, load_model_dict, get_regulated_statistics, get_forward_pred
from mela.variational.variational_meta_learning import VAE_Loss, sample_Gaussian, clone_net, get_nets, get_tasks, evaluate, get_reg, load_trained_models
from mela.variational.variational_meta_learning import plot_task_ensembles, plot_individual_tasks, plot_statistics_vs_z, plot_data_record, get_corrcoef
from mela.variational.variational_meta_learning import plot_few_shot_loss, plot_individual_tasks_bounce, plot_quick_learn_performance
from mela.variational.variational_meta_learning import get_latent_model_data, get_polynomial_class, get_Legendre_class, get_master_function
seed = 1
np.random.seed(seed)
torch.manual_seed(seed)
is_cuda = torch.cuda.is_available()
```
## Training:
```
task_id_list = [
# "latent-linear",
# "polynomial-3",
# "Legendre-3",
# "M-sawtooth",
# "M-sin",
# "M-Gaussian",
# "M-tanh",
# "M-softplus",
# "C-sin",
# "C-tanh",
"bounce-states",
# "bounce-images",
]
exp_id = "C-May16"
exp_mode = "meta"
# exp_mode = "finetune"
# exp_mode = "oracle"
is_VAE = False
is_uncertainty_net = False
is_regulated_net = False
is_load_data = False
VAE_beta = 0.2
task_id_list = get_args(task_id_list, 3, type = "tuple")
if task_id_list[0] in ["C-sin", "C-tanh"]:
statistics_output_neurons = 2 if task_id_list[0] == "C-sin" else 4
z_size = 2 if task_id_list[0] == "C-sin" else 4
num_shots = 10
input_size = 1
output_size = 1
reg_amp = 1e-6
forward_steps = [1]
is_time_series = False
elif task_id_list[0] in ["bounce-states", "bounce-states2"]:
statistics_output_neurons = 8
num_shots = 100
z_size = 8
input_size = 6
output_size = 2
reg_amp = 1e-8
forward_steps = [1]
is_time_series = True
elif task_id_list[0] == "bounce-images":
    raise NotImplementedError("task 'bounce-images' is not handled in this notebook")
lr = 5e-5
num_train_tasks = 100
num_test_tasks = 100
batch_size_task = num_train_tasks
num_iter = 10000
pre_pooling_neurons = 200
num_context_neurons = 0
statistics_pooling = "max"
struct_param_pre_neurons = (60,3)
struct_param_gen_base_neurons = (60,3)
main_hidden_neurons = (40, 40)
activation_gen = "leakyRelu"
activation_model = "leakyRelu"
optim_mode = "indi"
loss_core = "huber"
patience = 200
array_id = 0
exp_id = get_args(exp_id, 1)
exp_mode = get_args(exp_mode, 2)
statistics_output_neurons = get_args(statistics_output_neurons, 4, type = "int")
is_VAE = get_args(is_VAE, 5, type = "bool")
VAE_beta = get_args(VAE_beta, 6, type = "float")
lr = get_args(lr, 7, type = "float")
pre_pooling_neurons = get_args(pre_pooling_neurons, 8, type = "int")
num_context_neurons = get_args(num_context_neurons, 9, type = "int")
statistics_pooling = get_args(statistics_pooling, 10)
struct_param_pre_neurons = get_args(struct_param_pre_neurons, 11, "tuple")
struct_param_gen_base_neurons = get_args(struct_param_gen_base_neurons, 12, "tuple")
main_hidden_neurons = get_args(main_hidden_neurons, 13, "tuple")
reg_amp = get_args(reg_amp, 14, type = "float")
activation_gen = get_args(activation_gen, 15)
activation_model = get_args(activation_model, 16)
optim_mode = get_args(optim_mode, 17)
is_uncertainty_net = get_args(is_uncertainty_net, 18, "bool")
loss_core = get_args(loss_core, 19)
patience = get_args(patience, 20, "int")
forward_steps = get_args(forward_steps, 21, "tuple")
array_id = get_args(array_id, 22)
# Settings:
task_settings = {
"xlim": (-5, 5),
"num_examples": num_shots * 2,
"test_size": 0.5,
}
isParallel = False
inspect_interval = 20
save_interval = 200
num_backwards = 1
is_oracle = (exp_mode == "oracle")
if is_oracle:
input_size += z_size
oracle_size = z_size
else:
oracle_size = None
print("exp_mode: {0}".format(exp_mode))
# Obtain tasks:
assert len(task_id_list) == 1
dataset_filename = dataset_PATH + task_id_list[0] + "_{0}-shot.p".format(num_shots)
tasks = pickle.load(open(dataset_filename, "rb"))
tasks_train = get_torch_tasks(tasks["tasks_train"], task_id_list[0], num_forward_steps = forward_steps[-1], is_oracle = is_oracle, is_cuda = is_cuda)
tasks_test = get_torch_tasks(tasks["tasks_test"], task_id_list[0], start_id = num_train_tasks, num_tasks = num_test_tasks, num_forward_steps = forward_steps[-1], is_oracle = is_oracle, is_cuda = is_cuda)
# Obtain nets:
all_keys = list(tasks_train.keys()) + list(tasks_test.keys())
data_record = {"loss": {key: [] for key in all_keys}, "loss_sampled": {key: [] for key in all_keys}, "mse": {key: [] for key in all_keys},
"reg": {key: [] for key in all_keys}, "KLD": {key: [] for key in all_keys}}
if exp_mode in ["meta"]:
struct_param_pre = [[struct_param_pre_neurons[0], "Simple_Layer", {}] for _ in range(struct_param_pre_neurons[1])]
struct_param_pre.append([pre_pooling_neurons, "Simple_Layer", {"activation": "linear"}])
struct_param_post = None
struct_param_gen_base = [[struct_param_gen_base_neurons[0], "Simple_Layer", {}] for _ in range(struct_param_gen_base_neurons[1])]
statistics_Net, generative_Net, generative_Net_logstd = get_nets(input_size = input_size, output_size = output_size,
target_size = len(forward_steps) * output_size, main_hidden_neurons = main_hidden_neurons,
pre_pooling_neurons = pre_pooling_neurons, statistics_output_neurons = statistics_output_neurons, num_context_neurons = num_context_neurons,
struct_param_pre = struct_param_pre,
struct_param_gen_base = struct_param_gen_base,
activation_statistics = activation_gen,
activation_generative = activation_gen,
activation_model = activation_model,
statistics_pooling = statistics_pooling,
isParallel = isParallel,
is_VAE = is_VAE,
is_uncertainty_net = is_uncertainty_net,
is_cuda = is_cuda,
)
if is_regulated_net:
struct_param_regulated_Net = [[num_neurons, "Simple_Layer", {}] for num_neurons in main_hidden_neurons]
struct_param_regulated_Net.append([1, "Simple_Layer", {"activation": "linear"}])
generative_Net = Net(input_size = input_size, struct_param = struct_param_regulated_Net, settings = {"activation": activation_model})
master_model = Master_Model(statistics_Net, generative_Net, generative_Net_logstd, is_cuda = is_cuda)
if is_uncertainty_net:
optimizer = optim.Adam(chain.from_iterable([statistics_Net.parameters(), generative_Net.parameters(), generative_Net_logstd.parameters()]), lr = lr)
else:
optimizer = optim.Adam(chain.from_iterable([statistics_Net.parameters(), generative_Net.parameters()]), lr = lr)
reg_dict = {"statistics_Net": {"weight": reg_amp, "bias": reg_amp},
"generative_Net": {"weight": reg_amp, "bias": reg_amp, "W_gen": reg_amp, "b_gen": reg_amp}}
record_data(data_record, [struct_param_gen_base, struct_param_pre, struct_param_post], ["struct_param_gen_base", "struct_param_pre", "struct_param_post"])
model = None
elif exp_mode in ["finetune", "oracle"]:
struct_param_net = [[num_neurons, "Simple_Layer", {}] for num_neurons in main_hidden_neurons]
struct_param_net.append([output_size, "Simple_Layer", {"activation": "linear"}])
record_data(data_record, [struct_param_net], ["struct_param_net"])
model = Net(input_size = input_size,
struct_param = struct_param_net,
settings = {"activation": activation_model},
is_cuda = is_cuda,
)
reg_dict = {"net": {"weight": reg_amp, "bias": reg_amp}}
optimizer = optim.Adam(model.parameters(), lr = lr)
statistics_Net = None
generative_Net = None
generative_Net_logstd = None
master_model = None
# Loss function:
if loss_core == "mse":
loss_fun_core = nn.MSELoss(size_average = True)
elif loss_core == "huber":
loss_fun_core = nn.SmoothL1Loss(size_average = True)
else:
    raise Exception("loss_core {0} not recognized!".format(loss_core))
if is_VAE:
criterion = VAE_Loss(criterion = loss_fun_core, prior = "Gaussian", beta = VAE_beta)
else:
if is_uncertainty_net:
criterion = Loss_with_uncertainty(core = loss_core)
else:
criterion = loss_fun_core
early_stopping = Early_Stopping(patience = patience)
# Setting up recordings:
info_dict = {"array_id": array_id}
info_dict["data_record"] = data_record
info_dict["model_dict"] = []
record_data(data_record, [exp_id, tasks_train, tasks_test, task_id_list, task_settings, reg_dict, is_uncertainty_net, lr, pre_pooling_neurons, num_backwards, batch_size_task,
statistics_pooling, activation_gen, activation_model],
["exp_id", "tasks_train", "tasks_test", "task_id_list", "task_settings", "reg_dict", "is_uncertainty_net", "lr", "pre_pooling_neurons", "num_backwards", "batch_size_task",
"statistics_pooling", "activation_gen", "activation_model"])
filename = variational_model_PATH + "/trained_models/{0}/Net_{1}_{2}_input_{3}_({4},{5})_stat_{6}_pre_{7}_pool_{8}_context_{9}_hid_{10}_{11}_{12}_VAE_{13}_{14}_uncer_{15}_lr_{16}_reg_{17}_actgen_{18}_actmodel_{19}_{20}_core_{21}_pat_{22}_for_{23}_{24}_".format(
exp_id, exp_mode, task_id_list, input_size, num_train_tasks, num_test_tasks, statistics_output_neurons, pre_pooling_neurons, statistics_pooling, num_context_neurons, main_hidden_neurons, struct_param_pre_neurons, struct_param_gen_base_neurons, is_VAE, VAE_beta, is_uncertainty_net, lr, reg_amp, activation_gen, activation_model, optim_mode, loss_core, patience, forward_steps[-1], exp_id)
make_dir(filename)
print(filename)
# Training:
for i in range(num_iter + 1):
chosen_task_keys = np.random.choice(list(tasks_train.keys()), batch_size_task, replace = False).tolist()
if optim_mode == "indi":
if is_VAE:
KLD_total = Variable(torch.FloatTensor([0]), requires_grad = False)
if is_cuda:
KLD_total = KLD_total.cuda()
for task_key, task in tasks_train.items():
if task_key not in chosen_task_keys:
continue
((X_train, y_train), (X_test, y_test)), _ = task
for k in range(num_backwards):
optimizer.zero_grad()
if master_model is not None:
results = master_model.get_predictions(X_test = X_test, X_train = X_train, y_train = y_train, is_time_series = is_time_series,
is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, forward_steps = forward_steps)
else:
results = {}
results["y_pred"] = get_forward_pred(model, X_test, forward_steps, is_time_series = is_time_series, jump_step = 2, is_flatten = True, oracle_size = oracle_size)
if is_VAE:
loss, KLD = criterion(results["y_pred"], y_test, mu = results["statistics_mu"], logvar = results["statistics_logvar"])
KLD_total = KLD_total + KLD
else:
if is_uncertainty_net:
loss = criterion(results["y_pred"], y_test, log_std = results["y_pred_logstd"])
else:
loss = criterion(results["y_pred"], y_test)
reg = get_reg(reg_dict, statistics_Net = statistics_Net, generative_Net = generative_Net, net = model, is_cuda = is_cuda)
loss = loss + reg
loss.backward(retain_graph = True)
optimizer.step()
# Perform gradient on the KL-divergence:
if is_VAE:
KLD_total = KLD_total / batch_size_task
optimizer.zero_grad()
KLD_total.backward()
optimizer.step()
record_data(data_record, [KLD_total], ["KLD_total"])
elif optim_mode == "sum":
optimizer.zero_grad()
loss_total = Variable(torch.FloatTensor([0]), requires_grad = False)
if is_cuda:
loss_total = loss_total.cuda()
for task_key, task in tasks_train.items():
if task_key not in chosen_task_keys:
continue
((X_train, y_train), (X_test, y_test)), _ = task
if master_model is not None:
results = master_model.get_predictions(X_test = X_test, X_train = X_train, y_train = y_train, is_time_series = is_time_series,
is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, forward_steps = forward_steps)
else:
results = {}
results["y_pred"] = get_forward_pred(model, X_test, forward_steps, is_time_series = is_time_series, jump_step = 2, is_flatten = True, oracle_size = oracle_size)
if is_VAE:
loss, KLD = criterion(results["y_pred"], y_test, mu = results["statistics_mu"], logvar = results["statistics_logvar"])
loss = loss + KLD
else:
if is_uncertainty_net:
loss = criterion(results["y_pred"], y_test, log_std = results["y_pred_logstd"])
else:
loss = criterion(results["y_pred"], y_test)
reg = get_reg(reg_dict, statistics_Net = statistics_Net, generative_Net = generative_Net, net = model, is_cuda = is_cuda)
loss_total = loss_total + loss + reg
loss_total.backward()
optimizer.step()
else:
raise Exception("optim_mode {0} not recognized!".format(optim_mode))
loss_test_record = []
for task_key, task in tasks_train.items():
loss_test, _, _, _ = evaluate(task, master_model = master_model, model = model, criterion = criterion, is_time_series = is_time_series, is_VAE = is_VAE, is_regulated_net = is_regulated_net, forward_steps = forward_steps)
loss_test_record.append(loss_test)
to_stop = early_stopping.monitor(np.mean(loss_test_record))
# Validation and visualization:
if i % inspect_interval == 0 or to_stop:
print("=" * 50)
print("training tasks:")
for task_key, task in tasks_train.items():
loss_test, loss_test_sampled, mse, KLD_test = evaluate(task, master_model = master_model, model = model, criterion = criterion, is_time_series = is_time_series, is_VAE = is_VAE, is_regulated_net = is_regulated_net, forward_steps = forward_steps)
reg = get_reg(reg_dict, statistics_Net = statistics_Net, generative_Net = generative_Net, net = model, is_cuda = is_cuda).data[0]
data_record["loss"][task_key].append(loss_test)
data_record["loss_sampled"][task_key].append(loss_test_sampled)
data_record["mse"][task_key].append(mse)
data_record["reg"][task_key].append(reg)
data_record["KLD"][task_key].append(KLD_test)
print('{0}\ttrain\t{1} \tloss: {2:.9f}\tloss_sampled:{3:.9f} \tmse:{4:.9f}\tKLD:{5:.9f}\treg:{6:.9f}'.format(i, task_key, loss_test, loss_test_sampled, mse, KLD_test, reg))
for task_key, task in tasks_test.items():
loss_test, loss_test_sampled, mse, KLD_test = evaluate(task, master_model = master_model, model = model, criterion = criterion, is_time_series = is_time_series, is_VAE = is_VAE, is_regulated_net = is_regulated_net, forward_steps = forward_steps)
reg = get_reg(reg_dict, statistics_Net = statistics_Net, generative_Net = generative_Net, net = model, is_cuda = is_cuda).data[0]
data_record["loss"][task_key].append(loss_test)
data_record["loss_sampled"][task_key].append(loss_test_sampled)
data_record["mse"][task_key].append(mse)
data_record["reg"][task_key].append(reg)
data_record["KLD"][task_key].append(KLD_test)
print('{0}\ttest\t{1} \tloss: {2:.9f}\tloss_sampled:{3:.9f} \tmse:{4:.9f}\tKLD:{5:.9f}\treg:{6:.9f}'.format(i, task_key, loss_test, loss_test_sampled, mse, KLD_test, reg))
loss_train_list = [data_record["loss"][task_key][-1] for task_key in tasks_train]
loss_test_list = [data_record["loss"][task_key][-1] for task_key in tasks_test]
loss_train_sampled_list = [data_record["loss_sampled"][task_key][-1] for task_key in tasks_train]
loss_test_sampled_list = [data_record["loss_sampled"][task_key][-1] for task_key in tasks_test]
mse_train_list = [data_record["mse"][task_key][-1] for task_key in tasks_train]
mse_test_list = [data_record["mse"][task_key][-1] for task_key in tasks_test]
reg_train_list = [data_record["reg"][task_key][-1] for task_key in tasks_train]
reg_test_list = [data_record["reg"][task_key][-1] for task_key in tasks_test]
mse_few_shot = plot_few_shot_loss(master_model, tasks_test, forward_steps = forward_steps, is_time_series = is_time_series, isplot = isplot)
plot_quick_learn_performance(master_model if exp_mode in ["meta"] else model, tasks_test, forward_steps = forward_steps, is_time_series = is_time_series, isplot = isplot)
record_data(data_record,
[np.mean(loss_train_list), np.median(loss_train_list), np.mean(reg_train_list), i,
np.mean(loss_test_list), np.median(loss_test_list), np.mean(reg_test_list),
np.mean(loss_train_sampled_list), np.median(loss_train_sampled_list),
np.mean(loss_test_sampled_list), np.median(loss_test_sampled_list),
np.mean(mse_train_list), np.median(mse_train_list),
np.mean(mse_test_list), np.median(mse_test_list),
mse_few_shot,
],
["loss_mean_train", "loss_median_train", "reg_mean_train", "iter",
"loss_mean_test", "loss_median_test", "reg_mean_test",
"loss_sampled_mean_train", "loss_sampled_median_train",
"loss_sampled_mean_test", "loss_sampled_median_test",
"mse_mean_train", "mse_median_train", "mse_mean_test", "mse_median_test",
"mse_few_shot",
])
if isplot:
plot_data_record(data_record, idx = -1, is_VAE = is_VAE)
print("Summary:")
print('\n{0}\ttrain\tloss_mean: {1:.5f}\tloss_median: {2:.5f}\tmse_mean: {3:.6f}\tmse_median: {4:.6f}\treg: {5:.6f}'.format(i, data_record["loss_mean_train"][-1], data_record["loss_median_train"][-1], data_record["mse_mean_train"][-1], data_record["mse_median_train"][-1], data_record["reg_mean_train"][-1]))
print('{0}\ttest\tloss_mean: {1:.5f}\tloss_median: {2:.5f}\tmse_mean: {3:.6f}\tmse_median: {4:.6f}\treg: {5:.6f}'.format(i, data_record["loss_mean_test"][-1], data_record["loss_median_test"][-1], data_record["mse_mean_test"][-1], data_record["mse_median_test"][-1], data_record["reg_mean_test"][-1]))
if is_VAE and "KLD_total" in locals():
print("KLD_total: {0:.5f}".format(KLD_total.data[0]))
if isplot:
plot_data_record(data_record, is_VAE = is_VAE)
# Plotting y_pred vs. y_target:
statistics_list_train, z_list_train = plot_task_ensembles(tasks_train, master_model = master_model, model = model, is_time_series = is_time_series, is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, title = "y_pred_train vs. y_train", isplot = isplot)
statistics_list_test, z_list_test = plot_task_ensembles(tasks_test, master_model = master_model, model = model, is_time_series = is_time_series, is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, title = "y_pred_test vs. y_test", isplot = isplot)
record_data(data_record, [np.array(z_list_train), np.array(z_list_test), np.array(statistics_list_train), np.array(statistics_list_test)],
["z_list_train_list", "z_list_test_list", "statistics_list_train_list", "statistics_list_test_list"])
if isplot:
print("train statistics vs. z:")
plot_statistics_vs_z(z_list_train, statistics_list_train)
print("test statistics vs. z:")
plot_statistics_vs_z(z_list_test, statistics_list_test)
# Plotting individual test data:
if "bounce" in task_id_list[0]:
plot_individual_tasks_bounce(tasks_test, num_examples_show = 40, num_tasks_show = 6, master_model = master_model, model = model, num_shots = 200, valid_input_dims = input_size - z_size, target_forward_steps = len(forward_steps), eval_forward_steps = len(forward_steps))
else:
print("train tasks:")
plot_individual_tasks(tasks_train, master_model = master_model, model = model, is_time_series = is_time_series, is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, is_oracle = is_oracle, xlim = task_settings["xlim"])
print("test tasks:")
plot_individual_tasks(tasks_test, master_model = master_model, model = model, is_time_series = is_time_series, is_VAE = is_VAE, is_uncertainty_net = is_uncertainty_net, is_regulated_net = is_regulated_net, is_oracle = is_oracle, xlim = task_settings["xlim"])
print("=" * 50 + "\n\n")
try:
sys.stdout.flush()
except:
pass
if i % save_interval == 0 or to_stop:
if master_model is not None:
record_data(info_dict, [master_model.model_dict], ["model_dict"])
else:
record_data(info_dict, [model.model_dict], ["model_dict"])
pickle.dump(info_dict, open(filename + "info.p", "wb"))
if to_stop:
print("The training loss stops decreasing for {0} steps. Early stopping at {1}.".format(patience, i))
break
# Plotting:
if isplot:
for task_key in tasks_train:
plt.semilogy(data_record["loss"][task_key], alpha = 0.6)
plt.show()
for task_key in tasks_test:
plt.semilogy(data_record["loss"][task_key], alpha = 0.6)
plt.show()
print("completed")
sys.stdout.flush()
```
## Testing:
```
def get_test_result(model, lr, isplot = True):
print(dataset_filename)
tasks = pickle.load(open(dataset_filename, "rb"))
tasks_test = get_torch_tasks(tasks["tasks_test"], task_id_list[0], start_id = num_train_tasks, num_forward_steps = forward_steps[-1], is_oracle = is_oracle, is_cuda = is_cuda)
task_keys_all = list(tasks_test.keys())
mse_list_all = []
for i in range(int(len(tasks_test) / 100)):
print("{0}:".format(i))
task_keys_iter = task_keys_all[i * 100: (i + 1) * 100]
tasks_test_iter = {task_key: tasks_test[task_key] for task_key in task_keys_iter}
mse = plot_quick_learn_performance(model, tasks_test_iter, is_time_series = is_time_series, forward_steps = forward_steps, lr = lr, epochs = 20, isplot = isplot)['model_0'].mean(0)
mse_list_all.append(mse)
mse_list_all = np.array(mse_list_all)
info_dict["mse_test_lr_{0}".format(lr)] = mse_list_all
pickle.dump(info_dict, open(filename + "info.p", "wb"))
print("mean:")
print(mse_list_all.mean(0))
print("std:")
print(mse_list_all.std(0))
if isplot:
plt.figure(figsize = (8,6))
mse_list_all = np.array(mse_list_all)
mse_mean = mse_list_all.mean(0)
mse_std = mse_list_all.std(0)
plt.fill_between(range(len(mse_mean)), mse_mean - mse_std * 1.96 / np.sqrt(int(len(tasks_test) / 100)), mse_mean + mse_std * 1.96 / np.sqrt(int(len(tasks_test) / 100)), alpha = 0.3)
plt.plot(range(len(mse_mean)), mse_mean)
plt.title("{0}, {1}-shot regression, lr = {2}".format(task_id_list[0], num_shots, lr), fontsize = 20)
plt.xlabel("Number of gradient steps", fontsize = 18)
plt.ylabel("Mean Squared Error", fontsize = 18)
plt.show()
return mse_list_all
for lr in [1e-3, 5e-4, 2e-4]:
mse_list_all = get_test_result(master_model if master_model is not None else model, lr = lr, isplot = isplot)
```
```
import pandas as pd
import numpy as np
import warnings
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from tqdm import tqdm, tqdm_notebook
warnings.filterwarnings('ignore')
# BEWARE: ignoring warnings is not always a good idea
# I am only doing it here for cleaner presentation
```
# Private and Encrypted AI - Credit Approval Application
This notebook is meant for my exploratory development of unencrypted deep learning approach.
I will develop federated and encrypted models in other notebooks.
<a id='data_prep'></a>
## Data Preparation
- Only use non-NaN values. I drop rows with NaN values because the dataset is not very big to begin with, and we lose only a small fraction of rows.
- Convert binary variables to a numeric representation, and one-hot-encode categorical variables. We do not want to use a label encoder, since a label encoder would impose an artificial ordering on categories that have no natural order (see the short sketch below).
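To illustrate that last point, here is a minimal, hypothetical sketch (not part of the original pipeline; the category values are made up) contrasting label encoding with one-hot encoding on a small nominal column:
```python
import pandas as pd

# A tiny nominal column with no meaningful order between its categories
s = pd.Series(["u", "y", "l", "u"])

# Label encoding maps each category to an integer, which implies an ordering that isn't real
print(s.astype("category").cat.codes.tolist())

# One-hot encoding creates one independent indicator column per category instead
print(pd.get_dummies(s, dtype=int))
```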
```
cols = [ f"A{i}" for i in range(1,16)]
cols.append('label')
df = pd.read_csv('../data/crx.data', names=cols)\
.replace(to_replace='?', value=np.nan).dropna()
print(df.shape, "\n ------- \n")
print(df.head(2))
```
### Data Analysis
Let's check out what this data looks like first, so that we have an idea of what we are dealing with. In true encrypted, federated learning we would not have this luxury though...
```
def to_binary(df, col):
u = df[col].unique()
mapping =dict(zip(u, [i for i in range(0,len(u))]))
return df[col].map(mapping)
df.A1.head()
#convert to float
for col in ['A2', 'A3', 'A8', 'A11', 'A14', 'A15']:
df[col] = df[col].astype(float)
#binarize
for col in ['A1', 'A9', 'A10', 'A12', 'label']:
df[col] = to_binary(df, col)
onehot_cols = ['A4', 'A5', 'A6', 'A7', 'A13']
#perform one hot encoding, and drop original columns
df = df.join(pd.get_dummies(df[onehot_cols], dtype=int))\
.drop(onehot_cols, axis=1)
set(df.dtypes) #check that we have the data types we expect, no object types
#distribution of numeric-only columns
df[['A2', 'A3', 'A8', 'A11', 'A14', 'A15']].describe().iloc[1:, :10].round(3)
df.head(2) #double check what our DF looks like
```
### Simulate Real People's Data
To illustrate how this model would work in real life, I want to simulate this data belonging to people, so I generate a random name to associate with each row. I know that this is not an ideal example, since I am in fact starting with all of the data collated on my computer, with people's names and data directly exposed. Not private at all...
```
import names #used to get random names
names.get_first_name()+' ' +names.get_last_name() #call random name
users = []
used_names = set()
for idx in range(len(df)):
name = names.get_first_name()+' ' +names.get_last_name()
while name in used_names:
name = names.get_first_name()+' ' +names.get_last_name()
used_names.add(name)
users.append(name)
df['name'] = users
df.head(2)
#get features and labels as numpy arrays which we can convert to tensors
features = df.drop(['label', 'name'], axis=1).values.astype(float)
labels = df['label'].values.astype(float)
#normalize
sclr = MinMaxScaler()
features = sclr.fit_transform(features)
#save features and labels for future use
np.save('../data/features', features)
np.save('../data/labels', labels)
#save labels where shape is (1,2)
labels=pd.get_dummies(df['label']).values.astype(float)
np.save('../data/labels_dim', labels)
```
_Please Note_ <br>
Normalization is not necessary per se for any machine learning algorithm, but it is recommended for deep learning for training purposes. Read more [here](https://datascience.stackexchange.com/a/13221/60648).
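For reference, the `MinMaxScaler` used above rescales each feature to the [0, 1] range; a minimal sketch of the equivalent computation (the numbers are purely illustrative):
```python
import numpy as np

x = np.array([[2.0, 10.0],
              [4.0, 20.0],
              [6.0, 40.0]])

# Column-wise min-max scaling: (x - min) / (max - min), matching MinMaxScaler's default range
x_scaled = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))
print(x_scaled)
```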
## Model Development
I am using PyTorch to create a neural network to classify whether someone is accepted for credit or not. PyTorch integrates well with PySyft, the package used to encrypt our deep learning model.
```
import copy
from torch import nn
from torch import optim
import torch.nn.functional as F
import syft as sy
import torch as th
th.manual_seed(42) #so that dropout affects same layers
data = th.tensor(features, dtype=th.float32, requires_grad=True)
target = th.tensor(labels, dtype=th.float32, requires_grad=False).reshape(-1,2)
class Model(nn.Module):
'''
Neural Network Example Model
Attributes
:hidden_layers (nn.ModuleList) - hidden units and dimensions for each layer
:output (nn.Linear) - final fully-connected layer to handle output for model
:dropout (nn.Dropout) - handling of layer-wise drop-out parameter
Functions
:forward - handling of forward pass of datum through the network.
'''
def __init__(self, args):
super(Model, self).__init__()
self.hidden_layers = nn.ModuleList([nn.Linear(args.in_size,
args.hidden_layers[0])])
#create hidden layers
layer_sizes = zip(args.hidden_layers[:-1], args.hidden_layers[1:])
#gives input/output sizes for each layer
self.hidden_layers.extend([nn.Linear(h1, h2) for h1, h2 in layer_sizes])
self.output = nn.Linear(args.hidden_layers[-1], args.out_size)
self.dropout = None if args.drop_p is None \
else nn.Dropout(p=args.drop_p)
def forward(self, x):
x = x.view(-1, args.in_size)
for each in self.hidden_layers:
x = F.relu(each(x)) #apply relu to each hidden node
if self.dropout is not None:
x = self.dropout(x) #apply dropout
x = self.output(x) #apply output weights
if args.activation is None:
return x
return args.activation(x, dim=args.dim) #apply activation log softmax
```
<a id='classical_dl'></a>
## Classical Deep Learning
Here we train our network on data that is not distributed (therefore this is not yet a federated or encrypted problem). However, this exercise is useful in showing how we can transition from traditional deep learning to federated deep learning.
First create a dataset of batch size one. This is realistic since most people would only have their own credit score data. This might be different if we decide to use a secure or trusted third party to manage parts of the data, but we don't trust the credit rating company with our data.
```
class Arguments():
def __init__(self, in_size, out_size, hidden_layers,
activation=F.softmax, dim=-1):
self.batch_size = 1
self.drop_p = None
self.epochs = 300
self.lr = 0.001
self.in_size = in_size
self.out_size = out_size
self.hidden_layers = hidden_layers
self.precision_fractional=10
self.activation = activation
self.dim = dim
dataset = [(data[i], target[i].reshape(1,2)) for i in range(len(data))]
#instantiate model
in_size = data[0].shape[0]
out_size = 2
hidden_layers=[32,15,8]
_data, _target = dataset[0]
_data, _target
def train(model, datasets, criterion):
#use a simple stochastic gradient descent optimizer
#define optimizer for each model
optimizer = optim.SGD(params=model.parameters(), lr=args.lr)
steps=0
model.train() #training mode
for e in range(1, args.epochs+1):
running_loss=0
for ii, (data,target) in enumerate(datasets): #iterates over pointers to remote data
steps+=1
optimizer.zero_grad()#zero out gradients so that one forward pass doesnt pick up previous forward's gradients
outputs = model.forward(data) #make prediction
            outputs = outputs.reshape(1,-1) #reshape to (1,2) since the loss expects at least two dimensions
loss = criterion(outputs, target)
loss.backward()
optimizer.step()
#print(f"step: {steps}", loss.item())
running_loss+=loss.item()
if e%10==0:
print(f'Epoch: {e} \tLoss: {running_loss/len(datasets):.6f}')
running_loss=0
args = Arguments(in_size, out_size, hidden_layers, activation=F.softmax, dim=1)
base_model = Model(args)
model = copy.deepcopy(base_model) #exact replica of base model
train(model, dataset, nn.MSELoss())
```
We can also use PyTorch's `Dataset` class to make the processing of data a little easier, but for the purpose of this example it will not give any clear benefits. If you would like to read more about PyTorch's abstract `Dataset` class [read here](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html), with another example [here](https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel). Generally speaking, using `Dataset` and `DataLoader` makes the handling of training and testing data much easier.
```
from torch.utils.data import Dataset, DataLoader, TensorDataset
n_train_items = int(len(dataset)*0.7)
n_test_items = len(dataset) - n_train_items
train_dataset = TensorDataset(data[:n_train_items], target[:n_train_items])
test_dataset = TensorDataset(data[n_train_items:], target[n_train_items:]) #hold out the remaining rows for testing
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)
#this gives us an identical implementation, but split up into train/test set
%%time
#training loss will look a little different since the dataset is shuffled
model = copy.deepcopy(base_model)
train(model, train_loader, nn.MSELoss())
def test(model, dataloader, criterion):
print(f'Testing')
steps = 0
    model.eval() # evaluation mode (disables dropout)
pred = []
true = []
running_loss=0.
for ii, (data, target) in tqdm_notebook(enumerate(dataloader),
unit='datum', desc='testing',
total=len(dataloader)):
# iterates over pointers to remote data
steps += 1
outputs = model.forward(data) #make prediction
        outputs = outputs.reshape(1,-1)
        #reshape to (1,2) since we need at least two dimensions
loss = criterion(outputs, target)
_, y_pred = th.max(outputs, 1)
_, y_true = th.max(target, 1)
pred.append(y_pred.item())
true.append(y_true.item())
# get loss from remote worker and unencrypt
running_loss += loss.item()
    print('Testing Loss: {:.6f}'.format(running_loss/len(dataloader))) #average over the items actually evaluated
return pred, true
y_pred, y_true = test(model, test_loader, nn.MSELoss())
cnf_mtx = confusion_matrix(y_pred, y_true).astype(int)
print(cnf_mtx)
tp = cnf_mtx[1][1]
tn = cnf_mtx[0][0]
total = cnf_mtx.sum()
print(f"accuracy: {(tp+tn)/total:.2f}")
print(f"recall: {tp/sum(y_true):.2f}")
print(f"precision: {tp/sum(y_pred):.2f}")
```
Now we have a credit application model trained on our data. However, this is by no means federated learning yet: the implementation above simply trains a model with a batch size of 1. We will federate the model in the upcoming section.
Generally, the model looks pretty solid on the test set. I will save the model parameters here so that we can reuse them in our federated or encrypted models (so as not to train everything from scratch again).
Also, it took about 70 seconds to train this model, which amounts to roughly 0.233 seconds per epoch.
```
checkpoint={'model_state':model.state_dict(),
'in_size':in_size,
'out_size':out_size,
'hidden_layers':hidden_layers}
th.save(checkpoint, 'base_model.pt')
```
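As a quick illustration of how this checkpoint might be loaded back later, here is a minimal sketch; it assumes the `Arguments` and `Model` classes defined above are available and mirrors the keys saved in the dictionary:
```python
# Hypothetical reload of the saved checkpoint (sketch, not part of the original notebook)
checkpoint = th.load('base_model.pt')
args_reload = Arguments(checkpoint['in_size'], checkpoint['out_size'],
                        checkpoint['hidden_layers'], activation=F.softmax, dim=1)
model_reload = Model(args_reload)
model_reload.load_state_dict(checkpoint['model_state'])
model_reload.eval()  # ready for inference or further (federated) training
```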
Check out the **next step** in my exploration of techniques in privacy preserving AI with [federated learning here](https://github.com/mkucz95/private_ai_finance#federated-learning).
```
%matplotlib inline
from pathlib import Path
import dask.dataframe as dd
import pandas as pd
YEAR = 2019
slookup = pd.read_csv('ghcn_mos_lookup.csv')
```
# GHCN
```
names = ['ID', 'DATE', 'ELEMENT', 'DATA_VALUE', 'M-FLAG', 'Q-FLAG', 'S-FLAG', 'OBS-TIME']
ds = dd.read_csv(f's3://noaa-ghcn-pds/csv/{YEAR}.csv', storage_options={'anon':True},
names=names, parse_dates=['DATE'], dtype={'DATA_VALUE':'object'})
ghcn = ds[['ID', 'DATE', 'ELEMENT', 'DATA_VALUE']][ds['ID'].isin(slookup['ID']) & ds['ELEMENT'].str.match('TAVG')].compute()
ghcn.head()
```
# MOS
```
file_list = list(Path(f'station_filter').glob("*.csv"))
columns = ['station', 'short_model', 'model', 'runtime', 'ftime', 'N/X', 'X/N',
'TMP', 'DPT', 'WDR', 'WSP', 'CIG', 'VIS', 'P06', 'P12', 'POS', 'POZ',
'SNW', 'CLD', 'OBV', 'TYP', 'Q06', 'Q12', 'T06', 'T12']
usecols = ['station', 'runtime', 'ftime', 'TMP']
df = pd.read_csv(file_list[0], names=columns, usecols=usecols).drop_duplicates().dropna()
mask = df['runtime'].str.contains('2019') & df['ftime'].str.contains('2019')
dt = pd.to_datetime(df['ftime'][mask])
dt2 = pd.to_datetime(df['runtime'][mask])
((dt.dt.tz_localize('UTC') - dt2).dt.total_seconds()/3600).astype(int)
mos_tables = []
for f in sorted(file_list):
mos = pd.read_csv(f, names=columns, usecols=usecols).drop_duplicates().dropna()
#somehow there's a row where the header names got repeated
mos.drop(mos[mos['station'].str.match('station')].index, inplace=True)
mask = mos['runtime'].str.contains('2019') & mos['ftime'].str.contains('2019')
mos.drop(mos[~mask].index, inplace=True)
#filter & convert
mosc = mos[['station', 'TMP']].astype({'TMP':float})
mosc['runtime'] = pd.to_datetime(mos['runtime'])
mosc['ftime'] = pd.to_datetime(mos['ftime'])
mosc['hours'] = ((mosc['ftime'].dt.tz_localize('UTC') - mosc['runtime']).dt.total_seconds()/3600).astype(int)
mosc['date'] = mosc['ftime'].dt.date
mos_tables.append(mosc)
mos_all = pd.concat(mos_tables)
mos_all.to_csv("filtered_mos.csv", index=False)
mos_all.head()
slook = slookup.rename(columns={'Station':'station'})[['ID', 'station']]
slook.dtypes
mos_all.info()
moswn = pd.merge(mos_all, slook, how='right', on='station')
moswn['datestr'] = moswn['date'].astype(str)
mtable = moswn[[ 'ID', 'datestr', 'station','hours', 'TMP']].pivot_table(index=["ID", 'datestr', 'station'], columns=['hours'], values='TMP')
ghcn['DATESTR'] = ghcn['DATE'].astype(str)
ghcn.head()
pairup = pd.merge(mtable.reset_index(), ghcn, how='inner', right_on=['ID', 'DATESTR'], left_on=['ID', 'datestr'])
pairup.info()
pairup.columns
hours = [6.0, 9.0, 12.0, 15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0, 36.0, 39.0, 42.0, 45.0, 48.0, 51.0, 54.0, 57.0, 60.0, 66.0, 72.0]
hc = {i:str(int(i)) for i in hours}
columns = {'datestr':'date', 'DATA_VALUE':'observed'}
columns.update(hc)
table = pairup[['station', 'datestr', 'DATA_VALUE']+hours].rename(columns=columns)
table
table.to_csv("ALL_2019.csv", index=False)
```
This notebook is based on the [SGD mnist](https://github.com/fastai/fastai/blob/master/courses/ml1/lesson4-mnist_sgd.ipynb) lesson from fastai.
In this notebook we will start with a PyTorch neural network implementation of logistic regression and then program it ourselves.
# Imports and Paths
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
from data_sci.imports import *
from data_sci.utilities import *
from data_sci.fastai import *
from data_sci.fastai.dataset import *
from data_sci.fastai.metrics import *
from data_sci.fastai.torch_imports import *
from data_sci.fastai.model import *
import torch.nn as nn
```
Path to download the data
```
PATH = '/data/msnow/data_science/mnist/'
```
# MNIST Data
Let's download, unzip, and format the data.
```
URL='http://deeplearning.net/data/mnist/'
FILENAME='mnist.pkl.gz'
def load_mnist(filename):
return pickle.load(gzip.open(filename, 'rb'), encoding='latin-1')
get_data(os.path.join(URL,FILENAME), os.path.join(PATH,FILENAME))
((x, y), (x_valid, y_valid), _) = load_mnist(os.path.join(PATH,FILENAME))
type(x), x.shape, type(y), y.shape
y_valid.shape
```
### Normalize
Many machine learning algorithms behave better when the data is *normalized*, that is when the mean is 0 and the standard deviation is 1. We will subtract off the mean and standard deviation from our training set in order to normalize the data:
```
mean = x.mean()
std = x.std()
x=(x-mean)/std
mean, std, x.mean(), x.std()
```
Note that for consistency (with the parameters we learn when training), we subtract the mean and standard deviation of our training set from our validation set.
```
x_valid = (x_valid-mean)/std
x_valid.mean(), x_valid.std()
```
### Look at the data
```
def plots(ims, figsize=(12,6), rows=2, titles=None):
f = plt.figure(figsize=figsize)
cols = len(ims)//rows
for i in range(len(ims)):
sp = f.add_subplot(rows, cols, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i], cmap='gray')
x_valid.shape
y_valid.shape
```
The data has been flattened from a rank 3 tensor (image number by image height by image width) to a rank 2 tensor (image number by pixel). Before visualizing an image we need to convert it back into a rank 3 tensor using numpy's `reshape`.
`y_valid` is the array of integer labels corresponding to the images in `x_valid`.
```
x_imgs = np.reshape(x_valid, (-1,28,28)); x_imgs.shape
plt.imshow(x_imgs[0],cmap='gray')
plt.title(y_valid[0]);
plots(x_imgs[:8], titles=y_valid[:8])
```
# Neural Networks
A **function** takes inputs and returns outputs. The classic example is the equation for a line
$$ f(x) = ax + b $$
where $x$ is the input, $a$ and $b$ are the **parameters** and $f(x)$ is the output. A *neural network* is just a specific type of function (an *infinitely flexible function* to be exact), consisting of *layers*, which are made up of parameters. A *layer* is a linear function such as matrix multiplication followed by a non-linear function (the *activation*). For example, an equation for a simple neural network can be represented with the equation:
$$ f(\mathbf{x}) = a_{nl}\left(a_l \mathbf{x}\right) $$
where $\mathbf{x}$ is the input (the bold font just means that it is not just a scalar, i.e., it is a tensor of rank greater than 0), $a_l$ is the linear layer parameter, and $a_{nl}$ is the non linear layer parameter.
However, most neural networks are not nearly this simple. They often have thousands, or even hundreds of thousands of parameters. However the core idea is the same. The neural network is a function, and we will learn the best parameters for modeling our data.
As an aside, there always needs to be a non-linear function after each linear function, because multiple linear functions in a row can always be collapsed into a single linear function. This follows from the definition of linear transformations. For example, multiplying anything by 5 and then dividing the result by 2 is equivalent to multiplying the input by 5/2.
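A tiny numeric sketch of that aside (the matrices are random and purely illustrative): two stacked linear maps collapse into a single linear map, which is why a non-linearity is needed between layers.
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)           # an input vector
W1 = rng.normal(size=(4, 3))     # first "linear layer"
W2 = rng.normal(size=(2, 4))     # second "linear layer"

two_layers = W2 @ (W1 @ x)       # two linear layers applied in sequence
one_layer = (W2 @ W1) @ x        # a single equivalent linear layer

print(np.allclose(two_layers, one_layer))  # True
```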
# PyTorch
PyTorch has two overlapping, yet distinct, purposes. As described in the [PyTorch documentation](http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html):
<img src="images/what_is_pytorch.png" alt="pytorch" style="width: 80%"/>
The neural network functionality of PyTorch is built on top of the Numpy-like functionality for fast matrix computations on a GPU. Although the neural network purpose receives way more attention, both are very useful. We'll implement a neural net from scratch today using PyTorch.
**Further learning**: If you are curious to learn what *dynamic* neural networks are, you may want to watch [this talk](https://www.youtube.com/watch?v=Z15cBAuY7Sc) by Soumith Chintala, Facebook AI researcher and core PyTorch contributor.
If you want to learn more PyTorch, you can try this [introductory tutorial](http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html) or this [tutorial to learn by examples](http://pytorch.org/tutorials/beginner/pytorch_with_examples.html).
# Use the built in Neural Net for Logistic Regression in PyTorch
We will begin with the highest level of abstraction: using a neural net defined by PyTorch's `Sequential` class. In PyTorch, to switch between the CPU and a CUDA-enabled GPU you just need to add `.cuda()` to the end of any nn architecture:
- GPU enabled
```python
net = nn.Sequential(
nn.Linear(28*28, 10),
nn.LogSoftmax()
).cuda()
```
- GPU disabled (using cpu)
```python
net = nn.Sequential(
nn.Linear(28*28, 10),
nn.LogSoftmax()
)
```
As we said above, each layer of the neural network is just the data modified first by a linear function and then by a non-linear function. So to build a neural network version of multinomial logistic regression (termed softmax regression in neural networks), we only need a single linear layer followed by a softmax non-linearity. Just as a reminder, here is the softmax equation for predicting the $j$-th class given a sample vector $\mathbf{x}$ and weighting vectors $\mathbf{w}$:
$$ Pr(y=j \mid \mathbf{x}) = \dfrac{\exp\left(\mathbf{x}^T \mathbf{w}_j\right)}{\sum\limits_{k=1}^K\exp\left(\mathbf{x}^T\mathbf{w}_k\right)}$$
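A minimal numeric sketch of this formula (the class scores are made up, purely to show the mechanics):
```python
import numpy as np

scores = np.array([2.0, 1.0, 0.1])             # hypothetical x^T w_k for three classes
probs = np.exp(scores) / np.exp(scores).sum()  # the softmax formula above
print(probs.round(3), probs.sum())             # non-negative values that sum to 1
```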
Each input is a vector of size `28*28` pixels and our output is of size `10` (since there are 10 digits: 0, 1, ..., 9).
We use the output of the final layer to generate our predictions. Often for classification problems (like MNIST digit classification), the final layer has the same number of outputs as there are classes. In that case, this is 10: one for each digit from 0 to 9. These can be converted to comparative probabilities. For instance, it may be determined that a particular hand-written image is 80% likely to be a 4, 18% likely to be a 9, and 2% likely to be a 3.
So putting everything together, we want a layer composed of a linear component which converts our `28*28` rank 1 tensor (aka vector) inputs into an output rank 1 tensor of size `10` and then we want to apply a softmax classification function.
```
net = nn.Sequential(
nn.Linear(28*28, 10),
nn.LogSoftmax()
)
md = ImageClassifierData.from_arrays(PATH, (x,y), (x_valid, y_valid))
```
`ImageClassifierData` is from the [Fast AI library](https://github.com/fastai/fastai/blob/master/fastai/dataset.py) and wraps a PyTorch data loader, which grabs a few images at a time, sticks them into a mini-batch and makes them available; it behaves like a Python generator. You can access each of the different data loaders (see the short example after the list):
- training data loader = md.trn_dl
- validation data loader = md.val_dl
- test data loader = md.test_dl
- augmented data loader = md.aug_dl
- test augmented data loader = md.test_aug_dl
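For example, a minimal sketch of pulling one mini-batch from the training data loader; the shapes shown are what I would expect given the default batch size of 64 and the flattened 28*28 inputs:
```python
# Grab a single mini-batch from the training data loader to inspect its shape
x_batch, y_batch = next(iter(md.trn_dl))
print(x_batch.shape, y_batch.shape)  # expected to be roughly (64, 784) and (64,)
```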
## Aside about loss functions and metrics
In machine learning the **loss** function (or cost function) represents the price paid for inaccurate predictions. To understand where this loss function comes from, let's take a little detour into probability theory.
Given a fair coin what is the probability of the following outcomes:
- Tails, Heads
- Heads, Heads
- Tails, Tails
You don't need any equations to know that the probability of each of those outcomes is 0.25: there are four equally likely outcomes when you flip a fair coin twice. We can formalize a single flip through the probability mass function (pmf) of the Bernoulli distribution:
$$
f(y;p) = p^y\left(1-p\right)^{1-y} \quad \text{for } y \in \{0,1\}
$$
where $y$ indicates whether the event occurred (0 or 1), $p$ is the probability of the event occurring, and $f(y;p)$ is the probability of observing $y$ given $p$. One way to measure the loss of a prediction is to take the negative log-likelihood (or cross-entropy) of this function.
\begin{align}
\ell\left(y, p\right) & = -\log \left(p^y\left(1-p\right)^{1-y}\right)\\
&= -\left[\log \left(p^y\right) + \log\left(\left(1-p\right)^{1-y}\right)\right]\\
&= -\left[y\log p + (1-y) \log (1-p)\right]
\end{align}
In machine learning the equation is the same, but the terms mean slightly different things: for a given observation, $y$ is the true label and $p$ is the probability assigned by the model's prediction.
```
def binary_loss(y, p):
return np.mean(-(y * np.log(p) + (1-y)*np.log(1-p)))
acts = np.array([1, 0, 0, 1])
preds = np.array([0.9, 0.1, 0.2, 0.8])
binary_loss(acts, preds)
```
Note that in our toy example above the accuracy is 100% while the loss is about 0.16.
Why not just maximize accuracy directly? Because accuracy is a step function of the predictions, while the binary cross-entropy loss is smooth and therefore much easier to optimize.
For multi-class classification, we use the *negative log likelihood* (also known as *categorical cross entropy*), which is exactly the same idea but summed over all classes. For example, let's say we were trying to train a model to recognize pictures of animals, and the options were horse, dog, cat, lion, tiger, and bear. We can represent each of the options as a one-hot encoded vector:
| | horse | dog | cat | lion | tiger | bear |
|---:|--------:|------:|------:|-------:|--------:|-------:|
| 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| 2 | 0 | 0 | 1 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 | 1 | 0 | 0 |
| 4 | 0 | 0 | 0 | 0 | 1 | 0 |
| 5 | 0 | 0 | 0 | 0 | 0 | 1 |
let's say the model predicted the following output for a bear
| | bear | predicted_bear |
|---:|-------:|-----------------:|
| 0 | 0 | 0.05 |
| 1 | 0 | 0.05 |
| 2 | 0 | 0.05 |
| 3 | 0 | 0.05 |
| 4 | 0 | 0.05 |
| 5 | 1 | 0.75 |
to calculate the loss for the prediction we look at the loss for each observation in the vector.
| | bear | predicted_bear | loss_equation | observation_loss |
|---:|-------:|-----------------:|:----------------------------------|-------------------:|
| 0 | 0 | 0.05 | 0\*log(0.05) + (1-0)\*log(1-0.05) | 0.0512933 |
| 1 | 0 | 0.05 | 0\*log(0.05) + (1-0)\*log(1-0.05) | 0.0512933 |
| 2 | 0 | 0.05 | 0\*log(0.05) + (1-0)\*log(1-0.05) | 0.0512933 |
| 3 | 0 | 0.05 | 0\*log(0.05) + (1-0)\*log(1-0.05) | 0.0512933 |
| 4 | 0 | 0.05 | 0\*log(0.05) + (1-0)\*log(1-0.05) | 0.0512933 |
| 5 | 1 | 0.75 | 1\*log(0.75) + (1-1)\*log(1-0.75) | 0.287682 |
The total loss for that prediction is the sum of the individual observation losses, which comes out to about 0.544.
To understand this a little more, let's calculate the observation losses for a few more predictions:
| | bear | predict_1 | loss_1 | predict_2 | loss_2 | predict_3 | loss_3 |
|---:|-------:|------------:|----------:|------------:|----------:|------------:|---------:|
| 0 | 0 | 0.05 | 0.0512933 | 0.01 | 0.0100503 | 0.1 | 0.105361 |
| 1 | 0 | 0.05 | 0.0512933 | 0.01 | 0.0100503 | 0.1 | 0.105361 |
| 2 | 0 | 0.05 | 0.0512933 | 0.01 | 0.0100503 | 0.1 | 0.105361 |
| 3 | 0 | 0.05 | 0.0512933 | 0.01 | 0.0100503 | 0.1 | 0.105361 |
| 4 | 0 | 0.05 | 0.0512933 | 0.01 | 0.0100503 | 0.1 | 0.105361 |
| 5 | 1 | 0.75 | 0.287682 | 0.95 | 0.0512933 | 0.5 | 0.693147 |
You can see that as the predictions get more accurate the individual observation losses decrease and vice versa as the predictions get less accurate.
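As a quick check, here is a minimal sketch that reproduces the per-observation losses and the ~0.544 total for the first bear prediction, using the values copied from the tables above:
```python
import numpy as np

y = np.array([0, 0, 0, 0, 0, 1])                    # one-hot label for "bear"
p = np.array([0.05, 0.05, 0.05, 0.05, 0.05, 0.75])  # predicted probabilities

# Per-observation cross-entropy terms, matching the "observation_loss" column
obs_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
print(obs_loss.round(6))
print(obs_loss.sum().round(3))  # total loss for the prediction, ~0.544
```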
This one-hot encoding for multiclass classification is why, when creating our neural network above, we built the linear layer as `nn.Linear(28*28, 10)`. The output size is 10 to match the 1-by-10 one-hot encoded vectors of the digits.
## Now back to the model
As explained in the aside above, we want to use a negative log likelihood loss function for our model, `nn.NLLLoss()` in PyTorch. When evaluating our model we will use accuracy as the metric. For optimization we are going to use stochastic gradient descent, telling it to optimize the parameters of `net`, and setting other hyperparameters (such as the learning rate and momentum) which will be explained in other notebooks.
```
loss=nn.NLLLoss()
metrics=[accuracy]
opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9)
```
## Fitting the model
*Fitting* is the process by which the neural net learns the best parameters for the dataset.
```
fit(net, md, n_epochs=1, crit=loss, opt=opt, metrics=metrics)
```
GPUs are great at handling lots of data at once. We break the data up into **batches**, and that specifies how many samples from our dataset we want to send to the processor (GPU or CPU) at a time. The fastai library defaults to a batch size of 64. On each iteration of the training loop, the error on 1 batch of data will be calculated, and the optimizer will update the parameters based on that.
An **epoch** is completed once each data sample has been used once in the training loop.
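For example (a back-of-the-envelope sketch, assuming the 50,000-image training set loaded above and the default batch size of 64):

```python
import math

n_train, batch_size = 50_000, 64   # assumed sizes: the MNIST training set and the fastai default
batches_per_epoch = math.ceil(n_train / batch_size)
print(batches_per_epoch)           # 782 parameter updates make up one epoch
```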
Now that we have the parameters for our model, we can make predictions on our validation set.
```
preds = predict(net, md.val_dl)
preds.shape
```
Our output is 10,000 by 10 because we have 10,000 observations in our validation set and 10 predictions per observation, i.e., prediction that it's a zero, prediction that it's a one, ...
To see how accurate our predictions are, we want to know how often the digit highest predicted matched the actual digit. For now, we don't care about the actual value of the prediction, just the location of the highest prediction.
```
pd.DataFrame(preds[:3,:])
```
Argmax gives the location of the highest value in the array. In this case all the values are log probabilities, so they are negative, and the highest value is the one closest to zero (the smallest in magnitude). Since we want to know the location of the largest value across the columns, we have to set the axis to 1.
```
preds[:3,:].argmax(axis=1)
preds_argmax = preds.argmax(1)
```
Now let's see what percentage of predictions we got right
```
np.mean(preds_argmax == y_valid)
```
If you notice, this is the same accuracy that is the output of `fit`. Accuracy is the metric being used because that is what we set `metrics` to, much higher up. And the `loss` function we set to `NLLLoss`, which is the PyTorch abbreviation for negative log likelihood loss.
```
plots(x_imgs[:8], titles=preds_argmax[:8])
```
This is a one layer neural net, i.e., a logistic regression, and we can actually recreate the same output using scikit learn's logistic regression function. **TODO**
### DELETE ME Integrate the original code from the notebook into my cells
```
%%time
fit(net, md, n_epochs=5, crit=loss, opt=opt, metrics=metrics)
set_lrs(opt, 1e-2)
fit(net, md, n_epochs=3, crit=loss, opt=opt, metrics=metrics)
fit(net, md, n_epochs=5, crit=loss, opt=opt, metrics=metrics)
set_lrs(opt, 1e-2)
fit(net, md, n_epochs=3, crit=loss, opt=opt, metrics=metrics)
t = [o.numel() for o in net.parameters()]
t, sum(t)
```
# Building our own Neural Network for Logistic Regression in PyTorch
Above, we used pytorch's `nn.Linear` to create a linear layer, defined as a matrix multiplication followed by an addition (these are also called `affine transformations`). Let's try defining this ourselves, by building our own PyTorch class. A PyTorch module is either a neural net or a layer in a neural net (since neural nets are modular, a neural net itself can be a "layer" in a larger neural net).
Just as Numpy has `np.matmul` for matrix multiplication (in Python 3, this is equivalent to the `@` operator), PyTorch has `torch.matmul`. In other words `torch.matmul(x,y) == x@y`
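A minimal check of that equivalence:

```python
import torch

x = torch.randn(2, 3)
y = torch.randn(3, 4)
# both expressions compute the same (2, 4) matrix product
print(torch.allclose(torch.matmul(x, y), x @ y))  # True
```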
Our PyTorch class needs two things: a constructor (which says what the parameters are) and a forward method (how to calculate a prediction using those parameters). The method `forward` describes how the neural net converts inputs to outputs.
In PyTorch, the optimizer knows to try to optimize any attribute of type **Parameter**. This is why, above, when we created the SGD optimizer (`optim.SGD`) we had to call the `.parameters()` method on our network.
## Aside about sub classes in python
When creating a class, we can add functionality to a pre-existing class by calling that class when creating our new class. Our new class is a subclass of `nn.Module` and it inherits the properties of the `nn.Module` class. There is one rule when creating a subclass: you need to include the following line in your `__init__`
```python
super().__init__()
```
This tells Python to initialize the superclass (in this case the torch `nn.Module`) before initializing the current subclass (in this case `LogReg`).
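A toy example of the pattern (unrelated to our model, just to show the mechanics):

```python
class Animal:
    def __init__(self):
        self.alive = True

class Dog(Animal):
    def __init__(self, name):
        super().__init__()   # initialize the superclass first
        self.name = name     # then add subclass-specific attributes

d = Dog('rex')
print(d.alive, d.name)       # True rex
```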
## Back to the model
When initializing the random weights in the `get_weights` function, in order to prevent vanishing or exploding gradients we divide the weight matrix by its first dimension.
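To get a feel for the effect, here is a small illustrative sketch (shapes chosen to match the layer below) comparing the spread of a random linear layer's outputs with and without that scaling:

```python
import torch

x = torch.randn(64, 784)                 # a batch of standardized inputs

w_raw    = torch.randn(784, 10)          # unscaled random weights
w_scaled = torch.randn(784, 10) / 784    # scaled as in get_weights below

print((x @ w_raw).std())     # roughly sqrt(784) = 28: outputs are far larger than the inputs
print((x @ w_scaled).std())  # roughly 0.036: outputs stay on a small, controlled scale
```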
Since we don't know how many dimensions our data will have, we can use `*args`, which when used as a parameter in a function allows us to send a variable-length argument list. Note that it is the use of the asterisk, \*, that matters, the word `args` is just convention.
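A quick illustration of `*args`:

```python
def print_dims(*dims):
    # dims arrives as a tuple holding however many arguments were passed
    print(dims)

print_dims(28 * 28, 10)   # (784, 10)
print_dims(10)            # (10,)
```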
In torch `view` is the equivalent of `reshape`, and setting a dimension to -1 tells it to infer the size of that dimension from the other dimensions
```python
>>> x = torch.randn(4, 4)
>>> z = x.view(-1, 8)
>>> z.size()
torch.Size([2, 8])
```
```
def get_weights(*dims): return nn.Parameter(torch.randn(dims)/dims[0])
def softmax(x): return torch.exp(x)/(torch.exp(x).sum(dim=1)[:,None])
class LogReg(nn.Module):
def __init__(self):
super().__init__()
self.l1_w = get_weights(28*28, 10) # Layer 1 weights
self.l1_b = get_weights(10) # Layer 1 bias
def forward(self, x):
x = x.view(x.size(0), -1) # equivalent to reshape in numpy
x = (x @ self.l1_w) + self.l1_b # Linear Layer
x = torch.log(softmax(x)) # Non-linear (LogSoftmax) Layer
return x
```
We create our neural net and the optimizer. (We will use the same loss and metrics from above).
```
# net2 = LogReg().cuda()
net2 = LogReg()
# opt=optim.Adam(net2.parameters())
opt=optim.SGD(net2.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
fit(net2, md, n_epochs=1, crit=loss, opt=opt, metrics=metrics)
preds2 = predict(net2, md.val_dl)
preds2_argmax = preds2.argmax(axis=1)
plots(x_imgs[:8], titles=preds2_argmax[:8])
np.mean(preds2_argmax == y_valid)
```
## Not sure where this code is appropriate
```
dl = iter(md.trn_dl)
xmb,ymb = next(dl)
print(xmb.shape)
xmb
```
Wrapping a tensor in `Variable` is how you let PyTorch know to keep track of the differential for this parameter as it will need to perform SGD on it later.
```
# vxmb = Variable(xmb.cuda())
vxmb = Variable(xmb)
vxmb
preds = net2(vxmb).exp(); preds[:3]
preds.shape
preds.data.max(1)
preds.max(1)
preds = preds.data.max(1)[1]; preds
```
# Writing Our Own Training Loop
As a reminder, this is what we did above to write our own logistic regression class (as a pytorch neural net):
```
# Our code from above
class LogReg(nn.Module):
def __init__(self):
super().__init__()
self.l1_w = get_weights(28*28, 10) # Layer 1 weights
self.l1_b = get_weights(10) # Layer 1 bias
def forward(self, x):
x = x.view(x.size(0), -1)
x = x @ self.l1_w + self.l1_b
return torch.log(softmax(x))
# net2 = LogReg().cuda()
net2 = LogReg()
# opt=optim.Adam(net2.parameters())
opt=optim.SGD(net2.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
fit(net2, md, n_epochs=1, crit=loss, opt=opt, metrics=metrics)
```
Above, we are using the fastai method `fit` to train our model. Now we will try writing the training loop ourselves.
We will use the LogReg class we created, as well as the same loss function, learning rate, and optimizer as before:
```
# net2 = LogReg().cuda()
net2 = LogReg()
loss=nn.NLLLoss()
# learning_rate = 1e-3
opt=optim.SGD(net2.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
# optimizer=optim.Adam(net2.parameters(), lr=learning_rate)
```
md is the ImageClassifierData object we created above. We want an iterable version of our training data
```
dl = iter(md.trn_dl) # Data loader
```
First, we will do a **forward pass**, which means computing the predicted `y` by passing `x` to the model.
```
xt, yt = next(dl)
# y_pred = net2(Variable(xt).cuda())
y_pred = net2(Variable(xt))
```
We can check the loss:
```
# l = loss(y_pred, Variable(yt).cuda())
l = loss(y_pred, Variable(yt))
print(l)
```
We may also be interested in the accuracy. We don't expect our first predictions to be very good, because the weights of our network were initialized to random values. Our goal is to see the loss decrease (and the accuracy increase) as we train the network:
```
np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
```
Now we will use the optimizer to calculate which direction to step in. That is, how should we update our weights to try to decrease the loss?
Pytorch has an automatic differentiation package ([autograd](http://pytorch.org/docs/master/autograd.html)) that takes derivatives for us, so we don't have to calculate the derivative ourselves! We just call `.backward()` on our loss to calculate the direction of steepest descent (the direction to lower the loss the most).
```
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights
# of the model)
opt.zero_grad()
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
# Calling the step function on an Optimizer makes an update to its parameters
opt.step()
```
Now, let's make another set of predictions and check if our loss is lower:
```
xt, yt = next(dl)
# y_pred = net2(Variable(xt).cuda())
y_pred = net2(Variable(xt))
# l = loss(y_pred, Variable(yt).cuda())
l = loss(y_pred, Variable(yt))
print(l)
```
Note that we are using **stochastic** gradient descent, so the loss is not guaranteed to be strictly better each time. The stochasticity comes from the fact that we are using **mini-batches**; we are just using 64 images to calculate our prediction and update the weights, not the whole dataset.
```
np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
```
If we run several iterations in a loop, we should see the loss decrease and the accuracy increase with time.
```
for t in range(100):
xt, yt = next(dl)
# y_pred = net2(Variable(xt).cuda())
y_pred = net2(Variable(xt))
# l = loss(y_pred, Variable(yt).cuda())
l = loss(y_pred, Variable(yt))
if t % 10 == 0:
accuracy = np.mean(to_np(y_pred).argmax(axis=1) == to_np(yt))
print("loss: ", l.data[0], "\t accuracy: ", accuracy)
opt.zero_grad()
l.backward()
opt.step()
```
### Put it all together in a training loop
```
def score(x, y):
y_pred = to_np(net2(V(x)))
return np.sum(y_pred.argmax(axis=1) == to_np(y))/len(y_pred)
# net2 = LogReg().cuda()
net2 = LogReg()
loss=nn.NLLLoss()
learning_rate = 1e-2
optimizer=optim.SGD(net2.parameters(), lr=learning_rate)
trn_batches = md.trn_ds.n // md.bs + int(md.trn_ds.n % md.bs > 0)
val_batches = md.val_ds.n // md.bs + int(md.val_ds.n % md.bs > 0)
for epoch in range(1):
losses=[]
dl = iter(md.trn_dl)
# for t in range(len(dl)):
for t in range(trn_batches):
# Forward pass: compute predicted y and loss by passing x to the model.
xt, yt = next(dl)
y_pred = net2(V(xt))
l = loss(y_pred, V(yt))
losses.append(l)
# Before the backward pass, use the optimizer object to zero all of the
# gradients for the variables it will update (which are the learnable weights of the model)
optimizer.zero_grad()
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
# Calling the step function on an Optimizer makes an update to its parameters
optimizer.step()
val_dl = iter(md.val_dl)
val_scores = [score(*next(val_dl)) for i in range(val_batches)]
# val_scores = [score(*next(val_dl)) for i in range(len(val_dl))]
print(np.mean(val_scores))
```
# Stochastic Gradient Descent
Nearly all of deep learning is powered by one very important algorithm: **stochastic gradient descent (SGD)**. SGD can be seen as an approximation of **gradient descent (GD)**. In GD you have to run through all the samples in your training set to do a single iteration. In SGD you use only a subset of the training samples to do the update for a parameter in a particular iteration. The subset used in each iteration is called a batch or mini-batch.
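To make the contrast concrete, here is a small self-contained numpy sketch on a toy linear-regression problem (the data, the `grad` helper, and the learning rate are all made up for illustration, not part of our MNIST model):

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(1000, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(1000)

def grad(w, Xb, yb):
    # gradient of the mean squared error on the batch (Xb, yb)
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w_gd, w_sgd, lr, bs = np.zeros(3), np.zeros(3), 0.1, 64
for step in range(100):
    w_gd -= lr * grad(w_gd, X, y)                 # GD: gradient over the full dataset
    idx = np.random.choice(len(X), bs, replace=False)
    w_sgd -= lr * grad(w_sgd, X[idx], y[idx])     # SGD: gradient over one mini-batch
print(w_gd, w_sgd)   # both approach [1, -2, 0.5]; SGD is noisier per step but much cheaper
```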
Now, instead of using the optimizer, we will do the optimization ourselves!
```
# net2 = LogReg().cuda()
net2 = LogReg()
loss_fn=nn.NLLLoss()
lr = 1e-2
w,b = net2.l1_w,net2.l1_b
trn_batches = md.trn_ds.n // md.bs + int(md.trn_ds.n % md.bs > 0)
val_batches = md.val_ds.n // md.bs + int(md.val_ds.n % md.bs > 0)
for epoch in range(3):
losses=[]
dl = iter(md.trn_dl)
# for t in range(len(dl)):
for t in range(trn_batches):
xt, yt = next(dl)
y_pred = net2(V(xt))
# l = loss(y_pred, Variable(yt).cuda())
l = loss(y_pred, Variable(yt))
        losses.append(l)  # append the loss value for this batch, not the loss function itself
# Backward pass: compute gradient of the loss with respect to model parameters
l.backward()
w.data -= w.grad.data * lr
b.data -= b.grad.data * lr
w.grad.data.zero_() # suffix underscore just means do the operation in place instead of creating new variable
b.grad.data.zero_()
val_dl = iter(md.val_dl)
val_scores = [score(*next(val_dl)) for i in range(val_batches)]
# val_scores = [score(*next(val_dl)) for i in range(len(val_dl))]
print(np.mean(val_scores))
```
# Addenda
To see the number of parameters per layer in the neural network, we can call the `.parameters()` method on it. This creates a generator object which will go through each layer; if we then iterate through the generator, we can extract the number of elements in each layer using `.numel()`.
```
t = [o.numel() for o in net.parameters()]
t, sum(t)
```
## L2 Regularization and weight decay
Remember that we calculate our parameters by minimizing a loss function, which has the general form:
$$ \ell = \sum\limits_{i=1}^n\left(Y_i - \sum\limits_{j=1}^p X_{ij} w_j\right)^2 $$
One way to reduce the overfitting of our model is to penalize all non-zero weights. We can do this by adding a term to our loss function:
$$ \ell = \sum\limits_{i=1}^n\left(Y_i - \sum\limits_{j=1}^p X_{ij} w_j\right)^2 + \ell_{reg}(w)$$
We don't want to penalize weights with a value of zero, since a zero weight is the same as not having the parameter at all; we only want to penalize the degree to which the weights differ from zero. Two standard ways of doing this are L1 regularization, also known as lasso regression (least absolute shrinkage and selection operator), and L2 regularization. L1 regularization uses the absolute value of the weights and L2 uses the square of the weights. In both cases you don't add the full penalty term to the loss function, just some small fraction of it, on the order of $10^{-4}$ to $10^{-6}$, which we can call $\alpha$.
$$ \ell_{L1} = \sum\limits_{i=1}^n\left(Y_i - \sum\limits_{j=1}^p X_{ij} w_j\right)^2 + \alpha\sum\limits_{j=1}^p\left|w_j\right| $$
$$ \ell_{L2} = \sum\limits_{i=1}^n\left(Y_i - \sum\limits_{j=1}^p X_{ij} w_j\right)^2 + \alpha\sum\limits_{j=1}^p w_j^2 $$
When working with neural networks we are often concerned with the gradient of the loss, which for L2 is easy to calculate
$$\Delta_{L2} = 2\alpha w $$
In neural networks, this L2 gradient is referred to as the weight decay.
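In code this can be done either by adding the penalty to the loss by hand, or by passing `weight_decay` to the optimizer, which adds the corresponding gradient term directly to each parameter's gradient (essentially the same effect, up to how the constant is defined). A minimal, self-contained sketch of the manual version; the layer, data, loss, and the value of $\alpha$ here are all just for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(28 * 28, 10)        # a stand-in layer, just for illustration
alpha = 1e-4                          # illustrative regularization strength

def l2_penalty(m):
    # alpha times the sum of squared parameter values
    return alpha * sum((p ** 2).sum() for p in m.parameters())

x = torch.randn(64, 28 * 28)
y = torch.randint(0, 10, (64,))
base_loss = F.cross_entropy(model(x), y)
total_loss = base_loss + l2_penalty(model)   # this is the loss we would call .backward() on
print(base_loss.item(), total_loss.item())
```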
Let's build a slightly deeper simple neural network, with and without weight decay:
```
net = nn.Sequential(
nn.Linear(28*28, 100),
nn.ReLU(),
nn.Linear(100, 100),
nn.ReLU(),
nn.Linear(100, 10),
nn.LogSoftmax()
)
md = ImageClassifierData.from_arrays(PATH, (x,y), (x_valid, y_valid))
loss=nn.NLLLoss()
metrics=[accuracy]
opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9)
fit(net, md, n_epochs=3, crit=loss, opt=opt, metrics=metrics)
net = nn.Sequential(
nn.Linear(28*28, 100),
nn.ReLU(),
nn.Linear(100, 100),
nn.ReLU(),
nn.Linear(100, 10),
nn.LogSoftmax()
)
opt=optim.SGD(net.parameters(), 1e-1, momentum=0.9, weight_decay=1e-3)
fit(net, md, n_epochs=3, crit=loss, opt=opt, metrics=metrics)
```
You might expect the training loss to be worse with weight decay, but what may be happening here is that weight decay makes the loss function easier to optimize on a per-epoch level, resulting in a lower training loss. In the end, though, you would expect the training loss with weight decay to be worse (a larger value) than the training loss without weight decay.
However, weight decay should improve the overall loss on the validation set, which it does in this case. This is because the purpose of the weight decay is to decrease overfitting, which should make the model more generalizable.
# Analytical solution of the 1-D diffusion equation
As discussed in class, many physical problems encountered in the field of geosciences can be described with a diffusion equation, i.e. _relating the rate of change in time to the curvature in space_:
$$\frac{\partial u}{\partial t} = \kappa \frac{\partial^2 u}{\partial x^2}$$
Here, $u$ represents any type of property (e.g. temperature, pressure, hight, ...) and $\kappa$ is the proportionality constant, a general _diffusivity_.
As this equation is so general, we will investigate some of its properties and typical solutions here.
```
# first some basic Python imports
import matplotlib.pyplot as plt
import numpy as np
import scipy.special
from ipywidgets import interactive
plt.rcParams['figure.figsize'] = [8., 5.]
plt.rcParams['font.size'] = 16
from IPython.display import Audio, display
```
## Steady-state solution
Before actually looking at the change in time, let's take a look at what happens when things _do not_ change anymore - i.e. when the time derivative is equal to zero:
$$\frac{\partial u}{\partial t} = 0$$
So:
$$\kappa \frac{\partial^2 u}{\partial x^2} = 0$$
and as $\kappa \ne 0$:
$$\frac{\partial^2 u}{\partial x^2} = 0$$
What does this equation mean? The change of gradient (i.e. the curvature) is $0$, so the gradient is constant and, for a 1-D problem, the solution has to be linear!
Q: what happens for multidimensional problems?
So: how can we now obtain a solution for an actual problem? Note that, for any defined domain $X$, there are infinitely many solutions which would satisfy this equation, e.g. let's look at a couple random realizations:
```
def plot_steady_state(n=1):
# set number of lines with n
np.random.seed(seed=12345)
pts_left = np.random.uniform(0,1, size=(20))
pts_right = np.random.uniform(0,1, size=(20))
plt.plot((np.zeros(n),np.ones(n)),(pts_left[:n],pts_right[:n]), '-')
plt.xlim([0,1])
plt.ylim([0,1])
plt.xlabel('x')
plt.ylabel('u')
plt.show()
v = interactive(plot_steady_state, n=(1,20,1))
display(v)
```
So: **which one of these lines is the one which is the solution to a specific problem?**
It is clear that we need to define some aspects to _fix_ the line in space.
Q: which aspects could this be?
Of course, one option is to fix the values at both ends of the domain; then only one solution is left, e.g.:
```
pt_left = 0.2
pt_right = 0.5
plt.plot((0,1),(pt_left, pt_right), 'o-', markersize=12)
plt.xlim([-0.01,1.01])
plt.ylim([0,1])
plt.xlabel('x')
plt.ylabel('u')
plt.show()
```
Q: which other options can you think of?
Note: on a mathematical level, these additional conditions are required to obtain a solution are called **boundary conditions**!
## Transient solution
Let's now consider the case where we have a transient solution, evolving over time. As before, in order to define a solution, we have to define *boundary conditions*. In addition, as we consider changes over time, we have to define where we *start* - and this is defined with the *initial condition*.
A typical set of conditions, encountered in many physical problems, is related to an initial uniform state, which is suddenly perturbed on one side of an "infinite half-space":
<img src="./half_space_cooling.png">
The conditions for this model for $T(x,t)$ are, accordingly:
- Initial condition: $T(x,0) = T_0$
- Boundary conditions:
- $T(0,t) = T_1$
- $T(\infty,t) = T_0$
An analytical solution for this problem can be derived, and it has the general form:
$$ T(x,t) = T_1 + (T_0 - T_1)\;\mbox{erf}\left(\frac{x}{2\sqrt{\kappa t}}\right)$$
Where "erf" is the so-called "error function" (due to its relationship with the normal distribution), defined as:
$$\mbox{erf}(\eta) = \frac{2}{\sqrt{\pi}} \int_0^\eta e^{-u^2}\;du$$
### Error function
Here a plot of the error function:
```
xvals = np.arange(0,3.,0.001)
plt.plot(xvals, scipy.special.erf(xvals))
plt.xlabel('$\eta$')
plt.ylabel('erf($\eta$)')
plt.show()
```
Looking at the shape of this curve, it is intuitively evident that it is related to the diffusion problem considered above: a "pulse" of some sort is propagating into a domain (here in x-direction). This is maybe even better seen when inspecting the "complementary error function":
```
xvals = np.arange(0,3,0.01)
plt.plot(xvals, scipy.special.erfc(xvals))
plt.xlabel('x')
plt.ylabel('erfc(x)')
plt.show()
```
### Physical example
Let's now consider a "dimensionalized" example with actual physical parameters; recall first:
$$ T(x,t) = T_1 + (T_0 - T_1)\;\mbox{erf}\left(\frac{x}{2\sqrt{\kappa t}}\right)$$
If we consider a case of thermal diffusion, then a typical property would be:
- $\kappa = 10^{-6}$ m$^2$/s
**Q: what do you think, how far does a temperature pulse in such a medium propagate in 1 sec, 1 day, 1 year?**
```
xvals = np.arange(0.,0.006,0.0001)
t = 1 # second!!
def diffusion(x,t):
kappa = 1E-6
T_0 = 0
T_1 = 1
return T_1 + (T_0 - T_1) * scipy.special.erf(x/(2 * np.sqrt(kappa*t)))
plt.plot(xvals, diffusion(xvals, t))
plt.show()
```
Side note: how can we get a feeling for the propagation of such a pulse, on the basis of this analytical solution? Note that, in theory, the solution never actually reaches 0 (it only approaches it asymptotically!).
Idea: define a *point* at which a change in property should be noticeable. Typical decision:
$l_c = 2\sqrt{\kappa t}$
**Q: which point in the diagram does this value correspond to? And what is the relationship to the erf-plot?**
```
plt.plot(xvals, diffusion(xvals, t))
plt.axvline(2*np.sqrt(1E-6 * t), color='k', linestyle='--')
plt.show()
```
*Side note*: if you consider again where the definition of the error function actually comes from: what does this value correspond to?
Let's now consider a more geologically meaningful example: propagation over a longer time period, over a greater distance.
**Q: back to the characteristic length: how far would a pulse propagate over, say, 1000 years?**
```
year_sec = 3600.*24*365
char_length = 2 * np.sqrt(1E-6 * 1000 * year_sec)
print(char_length)
```
Let's look at such a propagation in a dynamic system:
```
xvals = np.arange(0,1000)
def plot_temp(year=50):
plt.plot(xvals, diffusion(xvals, year * year_sec))
plt.show()
v = interactive(plot_temp, year=(50,2001,150))
display(v)
```
Or, in comparison to time evolution:
```
xvals = np.arange(0,1000)
def plot_temp(year=50):
for i in range(int(year/50)):
plt.plot(xvals, diffusion(xvals, (i+1)*50 * year_sec),
color=plt.cm.copper_r(i/50), lw=2)
plt.show()
v = interactive(plot_temp, year=(50,2001,150))
display(v)
```
## Additional content
### Relationship error function - Normal distribution
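The connection can be stated directly: the cumulative distribution function of the standard normal distribution can be written in terms of the error function as

$$\Phi(x) = \frac{1}{2}\left(1 + \mbox{erf}\left(\frac{x}{\sqrt{2}}\right)\right)$$

so the curve plotted below is, up to a rescaling of its argument, a shifted and stretched version of the normal CDF.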
```
xvals = np.arange(-3,3,0.01)
plt.plot(xvals, scipy.special.erf(xvals))
plt.xlabel('x')
plt.ylabel('erf(x)')
plt.show()
```
# LIBRARIES
```
import matplotlib.pyplot as plt
import pylab
from sklearn.metrics import accuracy_score , classification_report, confusion_matrix, roc_auc_score,mean_squared_error,f1_score
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
import seaborn as sb
from sklearn.utils import resample
from sklearn.model_selection import train_test_split, KFold
from numpy import loadtxt
from xgboost import XGBClassifier
import sys
sys.path.append("../")
import os
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
baseline_gbm= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx")
baseline_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/LogReg/LR_Results.xlsx")
#dir level 1
dir_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_1/GBM/gbm_Results.xlsx")
dir_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_1/LogReg/LR_Results.xlsx")
#reweighing
rw_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/GBM/gbm_Results.xlsx")
rw_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/LogReg/LR_Results.xlsx")
#lfr
lfr_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/gbm_Results.xlsx")
lfr_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/LR_Results.xlsx")
#Adversarial Debiasing
AdDeb= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/AdDeb/AdDeb.xlsx")
#PRemover
PRemover=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover100.xlsx")
#Equal Odds
EO_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/EqualOdds/EO_gbm.xlsx")
EO_lr=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/EqualOdds/EO_LogReg.xlsx")
#CalEqual Odds
CalEO_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/CalEqualOdds/CalEO_gbm.xlsx")
CalEO_lr=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/CalEqualOdds/CalEO_LogReg.xlsx")
#varying levels of dir repair
#level 0
dir_gbm_0=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p0/GBM/gbm_Results.xlsx")
dir_lr_0= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p0/LogReg/LR_Results.xlsx")
#level 0.3
dir_gbm_3=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p3/GBM/gbm_Results.xlsx")
dir_lr_3= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p3/LogReg/LR_Results.xlsx")
#level 0.5
dir_gbm_5=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p5/GBM/gbm_Results.xlsx")
dir_lr_5= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p5/LogReg/LR_Results.xlsx")
#level 0.7
dir_gbm_7=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p7/GBM/gbm_Results.xlsx")
dir_lr_7= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p7/LogReg/LR_Results.xlsx")
#PRemover at other values
PRemover75=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover75.xlsx")
PRemover50=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover50.xlsx")
PRemover25=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover25.xlsx")
PRemover1=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover1.xlsx")
#Meta Classifier at various values
Meta0 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta0.xlsx")
Meta2 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta2.xlsx")
Meta4 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta4.xlsx")
Meta6 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta6.xlsx")
Meta8 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta8.xlsx")
Meta1 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta10.xlsx")
```
# METRICS
We put together various metrics for the fairness-utility trade-off.
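Every block below follows the same pattern: read the row of averages (row 51) and the row of standard deviations (row 52) from a sheet and concatenate them. A small helper capturing that pattern could look like the sketch below (a sketch only; the explicit per-method code is kept as-is in the cell that follows):

```python
import pandas as pd

def load_avg_std(xls, sheet):
    # row 51 holds the averages, row 52 the standard deviations (as in the sheets used below)
    avg = pd.read_excel(xls, sheet_name=sheet)[51:52].reset_index()
    std = pd.read_excel(xls, sheet_name=sheet)[52:53].add_suffix('_std').reset_index()
    return pd.concat([avg, std], axis=1).drop('index', axis=1)

# e.g. Law_gbm_baseline = load_avg_std(baseline_gbm, "Law")
```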
```
#baselines
Law_gbm_baseline=pd.read_excel(baseline_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_baseline_std= pd.read_excel(baseline_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_baseline= pd.concat([Law_gbm_baseline,Law_gbm_baseline_std], axis=1).drop('index',axis=1) # reset_index adds an index column, which we drop
#logistic regression
Law_lr_baseline= pd.read_excel(baseline_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_baseline_std= pd.read_excel(baseline_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_baseline= pd.concat([Law_lr_baseline,Law_lr_baseline_std], axis=1).drop('index',axis=1)
# #disparate impact remover
Law_gbm_dir=pd.read_excel(dir_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_dir_std= pd.read_excel(dir_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_dir= pd.concat([Law_gbm_dir,Law_gbm_dir_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_dir= pd.read_excel(dir_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_dir_std= pd.read_excel(dir_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_dir= pd.concat([Law_lr_dir,Law_lr_dir_std], axis=1).drop('index',axis=1)
# #reweighing
Law_gbm_rw=pd.read_excel(rw_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_rw_std= pd.read_excel(rw_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_rw= pd.concat([Law_gbm_rw,Law_gbm_rw_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_rw= pd.read_excel(rw_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_rw_std= pd.read_excel(rw_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_rw= pd.concat([Law_lr_rw,Law_lr_rw_std], axis=1).drop('index',axis=1)
# lfr
Law_gbm_lfr=pd.read_excel(lfr_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_lfr_std= pd.read_excel(lfr_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_lfr= pd.concat([Law_gbm_lfr,Law_gbm_lfr_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_lfr= pd.read_excel(lfr_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_lfr_std= pd.read_excel(lfr_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_lfr= pd.concat([Law_lr_lfr,Law_lr_lfr_std], axis=1).drop('index',axis=1)
#Prejudice Remover
Law_pr_remover= pd.read_excel(PRemover, sheet_name="Law")[51:52].reset_index()
Law_pr_remover_std= pd.read_excel(PRemover, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_pr_remover= pd.concat([Law_pr_remover,Law_pr_remover_std], axis=1).drop('index',axis=1)
#Adversarial Debiasing
Law_AdDeb= pd.read_excel(AdDeb, sheet_name="Law")[51:52].reset_index()
Law_AdDeb_std= pd.read_excel(AdDeb, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_AdDeb= pd.concat([Law_AdDeb,Law_AdDeb_std], axis=1).drop('index',axis=1)
#Meta Classifier
Law_Meta= pd.read_excel(Meta1, sheet_name="Law")[51:52].reset_index()
Law_Meta_std= pd.read_excel(Meta1, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_Meta= pd.concat([Law_Meta,Law_Meta_std], axis=1).drop('index',axis=1)
# EqOdds
Law_gbm_EO=pd.read_excel(EO_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_EO_std= pd.read_excel(EO_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_EO= pd.concat([Law_gbm_EO,Law_gbm_EO_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_EO= pd.read_excel(EO_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_EO_std= pd.read_excel(EO_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_EO= pd.concat([Law_lr_EO,Law_lr_EO_std], axis=1).drop('index',axis=1)
# CalEqOdds
Law_gbm_CalEO=pd.read_excel(CalEO_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_CalEO_std= pd.read_excel(CalEO_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_CalEO= pd.concat([Law_gbm_CalEO,Law_gbm_CalEO_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_CalEO= pd.read_excel(CalEO_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_CalEO_std= pd.read_excel(CalEO_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_CalEO= pd.concat([Law_lr_CalEO,Law_lr_CalEO_std], axis=1).drop('index',axis=1)
#ideal dataframes
ideal = pd.DataFrame(index=[0], data={'x':0, 'y':1})
idealfairness = pd.DataFrame(index=[0], data={'x':0, 'y':0})
```
# Plotting Fairness Accuracy Tradeoffs
# Performance Vs Group Fairness
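Each panel in the figure below repeats the same scatter call for every (method, color, marker) combination. A compact helper expressing that pattern might look like the following sketch (hypothetical; it reuses the dataframes and the `ideal` point defined above, and the cell that follows keeps the explicit calls):

```python
METHODS = [
    # (results dataframe, label, color, marker) - same styling as the explicit calls below
    (Law_lr_baseline,  'LR',  'tab:red', 'D'),
    (Law_gbm_baseline, 'GBM', 'tab:red', 'o'),
    # ... remaining methods in the same format ...
]

def scatter_tradeoff(ax, fairness_col, utility_col):
    for df, label, color, marker in METHODS:
        ax.scatter(df[fairness_col], df[utility_col], color=color, label=label, marker=marker)
    ax.scatter(ideal['x'], ideal['y'], color='black', marker='*', s=25)

# e.g. scatter_tradeoff(axs[0, 0], 'SP', 'ACCURACY')
```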
```
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['SP'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['SP'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['SP'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['SP'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['SP'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['SP'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['EO'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['EO'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['EO'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['EO'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['EO'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['EO'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['BGEI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['SP'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['SP'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['SP'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['SP'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['SP'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['SP'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['EO'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['EO'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['EO'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['EO'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['EO'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['EO'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['BGEI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['SP'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['SP'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['SP'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['SP'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['SP'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['SP'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['EO'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['EO'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['EO'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['EO'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['EO'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['EO'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['BGEI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
ax.label_outer()
axs[0,0].set_ylabel('Accuracy')
axs[0,0].set_title('Statistical Parity')
axs[1,0].set_ylabel('Precision')
axs[0,1].set_title('Equal Odds')
axs[2,0].set_ylabel('NPV')
axs[0,2].set_title('BGEI')
fig.suptitle('Data: Law')
#axs[0,0].legend(bbox_to_anchor=(4.8, 1.2),ncol=1)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/PerFair2.png',dpi=300, format='png', bbox_inches='tight')
figsize = (3, 3)
fig_leg = plt.figure(figsize=figsize)
ax_leg = fig_leg.add_subplot(111)
# add the legend from the previous axes
ax_leg.legend(*axs[0,0].get_legend_handles_labels())
# hide the axes frame and the x/y labels
ax_leg.axis('off')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/legend.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
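The panels above repeat the same fifteen `scatter` calls for every metric pair. A minimal refactoring sketch, assuming each per-method DataFrame (e.g. `Law_lr_baseline`) exposes the metric columns used above; the `methods` mapping and `plot_panel` helper are hypothetical, not part of the original notebook:
```
# Hypothetical helper: collapses the repeated per-method scatter calls into one loop.
# Assumes every DataFrame in `methods` has the requested metric columns.
methods = {
    'LR':            (Law_lr_baseline,  'tab:red',   'D'),
    'GBM':           (Law_gbm_baseline, 'tab:red',   'o'),
    'LR_DIRemover':  (Law_lr_dir,       'tab:green', 'D'),
    'GBM_DIRemover': (Law_gbm_dir,      'tab:green', 'o'),
    # ... the remaining methods follow the same (DataFrame, color, marker) pattern
}

def plot_panel(ax, xcol, ycol, ideal_xy=None):
    """Scatter every method on one axis for the given (x, y) metric pair."""
    for label, (df, color, marker) in methods.items():
        ax.scatter(df[xcol], df[ycol], color=color, marker=marker, label=label)
    if ideal_xy is not None:
        ax.scatter(*ideal_xy, color='black', marker='*', s=25)  # ideal point

# e.g. plot_panel(axs[0,0], 'SP', 'ACCURACY', ideal_xy=(ideal['x'], ideal['y']))
```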
# Performance vs Individual Fairness
```
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,1,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,1,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,1,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
ax.label_outer()
axs[0,0].set_ylabel('Accuracy')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('Precision')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('NPV')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/PerFair1.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
# CONSISTENCY BETWEEN GROUP AND INDIVIDUAL FAIRNESS
```
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,0,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,0,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,0,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
ax.label_outer()
axs[0,0].set_ylabel('Statistical Parity')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('Equal Odds')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('BGEI')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/GFvsIF1.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,0,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,0,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,0,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
ax.label_outer()
axs[0,0].set_ylabel('Equal Opportunity')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('PPV-diff')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('NPV-diff')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
#axs[0,0].legend(bbox_to_anchor=(4.8, 1.2),ncol=1)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/GFvsIF2.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
# Repair Level Effects
GBM
```
#Law
#disparate impact values at varying repair levels for gbm
gbm0=pd.read_excel(dir_gbm_0, sheet_name="Law").iloc[51:52]
gbm3=pd.read_excel(dir_gbm_3, sheet_name="Law").iloc[51:52]
gbm5=pd.read_excel(dir_gbm_5, sheet_name="Law").iloc[51:52] #old sheet was not working; had to recreate a new tab (Law_use)
gbm7=pd.read_excel(dir_gbm_7, sheet_name="Law").iloc[51:52]
gbm1= pd.read_excel(dir_gbm, sheet_name='Law').iloc[51:52]
#all together
gbms= pd.concat([gbm0,gbm3, gbm5, gbm7, gbm1])
gbms['Repair Level']=[0,0.3,0.5,0.7,1]
```
LR
```
#disparate impact values at varying repair levels for lr
lr0=pd.read_excel(dir_lr_0, sheet_name="Law").iloc[51:52]
lr3=pd.read_excel(dir_lr_3, sheet_name="Law").iloc[51:52]
lr5=pd.read_excel(dir_lr_5, sheet_name="Law").iloc[51:52] #old sheet was not working; had to recreate a new tab (Law_use)
lr7=pd.read_excel(dir_lr_7, sheet_name="Law").iloc[51:52]
lr1= pd.read_excel(dir_lr, sheet_name='Law').iloc[51:52]
#all together
lrs= pd.concat([lr0, lr3, lr5, lr7, lr1])
lrs['Repair Level']=[0,0.3,0.5,0.7,1]
```
Disparate impact and accuracy vs. repair level
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex= True)
ax1.plot(gbms['Repair Level'],gbms['ACCURACY'], marker='x',label='GBM')
ax1.plot(lrs['Repair Level'], lrs['ACCURACY'], marker='s',label='LogReg')
ax2.plot(gbms['Repair Level'], gbms['DI'], marker='x',label='GBM')
ax2.plot(lrs['Repair Level'], lrs['DI'], marker='s',label='LogReg')
#ax2.plot(1,1, marker= '*', color='black')
ax2.axhline(y=1.0, linestyle='--', color= 'green',label='Ideal DI')
ax2.axhline(y=0.8, linestyle='--', color= 'red',label='minimum DI')
#ax2.plot(1,1, marker= '*', color='black')
ax1.set_ylabel('Accuracy')
ax1.set_title("Disparate Impact Remover | Data: Law")
ax2.set_ylabel('Disparate Impact (DI)')
ax2.set_xlabel('Repair Level (\u03BB)')
plt.legend(bbox_to_anchor=(0.7, -0.4), ncol= 2)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/DIRemover.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
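The repair-level sweep above reads pre-computed result workbooks. For reference, a minimal sketch of how a single repair level could be applied with AIF360's `DisparateImpactRemover` is shown below; the `train_ds`/`test_ds` dataset objects and the downstream GBM classifier are assumptions for illustration, not objects defined in this notebook.
```
# Minimal sketch (assumption: train_ds/test_ds are aif360 BinaryLabelDatasets for Law)
from aif360.algorithms.preprocessing import DisparateImpactRemover
from sklearn.ensemble import GradientBoostingClassifier

def predictions_at_repair_level(train_ds, test_ds, repair_level):
    # Repair the feature distributions at the chosen level (0 = no repair, 1 = full repair)
    remover = DisparateImpactRemover(repair_level=repair_level)
    train_rep = remover.fit_transform(train_ds)
    test_rep = remover.fit_transform(test_ds)
    # Train the same kind of classifier as the GBM baseline above
    clf = GradientBoostingClassifier()
    clf.fit(train_rep.features, train_rep.labels.ravel())
    return clf.predict(test_rep.features)

# Sweep the same repair levels as the workbooks read above:
# preds = {lam: predictions_at_repair_level(train_ds, test_ds, lam) for lam in [0, 0.3, 0.5, 0.7, 1.0]}
```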
Hyper-parameter variation: Prejudice Remover (η) and Meta Classifier (τ)
```
#Prejudice Remover results (performance and fairness) at varying eta values
PR1=pd.read_excel(PRemover1, sheet_name="Law").iloc[51:52]
PR25=pd.read_excel(PRemover25, sheet_name="Law").iloc[51:52]
PR50=pd.read_excel(PRemover50, sheet_name="Law").iloc[51:52] #old sheet was not working; had to recreate a new tab (Law_use)
PR75=pd.read_excel(PRemover75, sheet_name="Law").iloc[51:52]
PR100= pd.read_excel(PRemover, sheet_name='Law').iloc[51:52]
#all together
prs= pd.concat([PR1,PR25,PR50,PR75,PR100])
prs['Etas']=[1,25,50,75,100]
#Meta classifier results (performance and fairness) at varying tau values
meta_0=pd.read_excel(Meta0, sheet_name="Law").iloc[51:52]
meta_2=pd.read_excel(Meta2, sheet_name="Law").iloc[51:52]
meta_4=pd.read_excel(Meta4, sheet_name="Law").iloc[51:52] #old sheet was not working; had to recreate a new tab (Law_use)
meta_6=pd.read_excel(Meta6, sheet_name="Law").iloc[51:52]
meta_8= pd.read_excel(Meta8, sheet_name='Law').iloc[51:52]
meta_1=pd.read_excel(Meta1, sheet_name='Law').iloc[51:52]
#all together
metas= pd.concat([meta_0, meta_2, meta_4, meta_6, meta_8, meta_1])
metas['Tau']=[0,0.2,0.4,0.6,0.8,1.0]
fig,axs = plt.subplots(nrows= 2, ncols= 2)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].plot(prs['Etas'],prs['ACCURACY'],color='tab:red', label='Accuracy', linestyle='-',marker= "D")
axs[0,0].plot(prs['Etas'],prs['PPV'],color='tab:orange', label='Precision', linestyle='-',marker= "s")
axs[0,0].plot(prs['Etas'],prs['NPV'],color='tab:purple', label='NPV', linestyle='-',marker= "*")
axs[0,1].plot(metas['Tau'],metas['ACCURACY'],color='tab:red', label='Accuracy', linestyle='-',marker= "D")
axs[0,1].plot(metas['Tau'],metas['PPV'],color='tab:orange', label='Precision', linestyle='-',marker= "s")
axs[0,1].plot(metas['Tau'],metas['NPV'],color='tab:purple', label='NPV', linestyle='-',marker= "*")
axs[1,0].plot(prs['Etas'],prs['SP'],color='tab:olive', label='SP', linestyle='-',marker= "v")
axs[1,0].plot(prs['Etas'],prs['WGEI'],color='tab:blue', label='WGEI', linestyle='-',marker= "o")
axs[1,0].plot(prs['Etas'],prs['EO'],color='tab:gray', label='EO', linestyle='-',marker= "x")
axs[1,0].plot(prs['Etas'],prs['BGEI'],color='tab:brown', label='BGEI', linestyle='-',marker= "<")
axs[1,1].plot(metas['Tau'],metas['SP'],color='tab:olive', label='SP', linestyle='-',marker= "v")
axs[1,1].plot(metas['Tau'],metas['WGEI'],color='tab:blue', label='WGEI', linestyle='-',marker= "o")
axs[1,1].plot(metas['Tau'],metas['EO'],color='tab:gray', label='EO', linestyle='-',marker= "x")
axs[1,1].plot(metas['Tau'],metas['BGEI'],color='tab:brown', label='BGEI', linestyle='-',marker= "<")
axs[0,0].set_title('Prejudice Remover')
axs[0,0].set_ylabel('Predictive Performance')
axs[1,0].set_ylabel('Fairness Measure')
axs[0,1].set_title('Meta Algorithm')
axs[1,0].set_xlabel('Tuning Parameter (\u03B7)')
axs[1,1].set_xlabel('Tuning Parameter (\u03C4)')
# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
ax.label_outer()
axs[0,0].legend(bbox_to_anchor=(2.8, 1),ncol=1)
axs[1,1].legend(bbox_to_anchor=(1,1),ncol=1)
fig.suptitle('Effect of hyper-parameter variation on performance and fairness | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Hypers.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
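Like the repair-level sweep, the two hyper-parameter sweeps above read stored results. A rough sketch of how the η and τ points could be produced with AIF360's in-processing classes is given below; the `train_ds`/`test_ds` objects and the `'race'` attribute name are assumptions for illustration, not definitions from this notebook.
```
# Rough sketch (assumption: train_ds/test_ds are aif360 BinaryLabelDatasets and
# 'race' is the protected attribute used throughout this survey)
from aif360.algorithms.inprocessing import PrejudiceRemover, MetaFairClassifier

def prejudice_remover_preds(train_ds, test_ds, eta):
    model = PrejudiceRemover(eta=eta, sensitive_attr='race')
    model.fit(train_ds)
    return model.predict(test_ds).labels   # predicted labels for the test split

def meta_classifier_preds(train_ds, test_ds, tau):
    model = MetaFairClassifier(tau=tau, sensitive_attr='race')
    model.fit(train_ds)
    return model.predict(test_ds).labels

# etas = [1, 25, 50, 75, 100]; taus = [0, 0.2, 0.4, 0.6, 0.8, 1.0]
```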
# Computing Performance Metrics
## Dataframes
Baselines
```
#Baselines
#Logistic Regression Baseline
LR=pd.read_excel(baseline_lr, sheet_name="Law")[51:52]
LR_std= pd.read_excel(baseline_lr, sheet_name="Law")[52:53].add_suffix('_std')
#GBM Baseline
GBM=pd.read_excel(baseline_gbm, sheet_name="Law")[51:52]
GBM_std= pd.read_excel(baseline_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
Disparate impact remover
```
#DIR+ Logistic Regression
LR_dir=pd.read_excel(dir_lr, sheet_name="Law")[51:52]
LR_dir_std= pd.read_excel(dir_lr, sheet_name="Law")[52:53].add_suffix('_std')
#DIR+ GBM
GBM_dir=pd.read_excel(dir_gbm, sheet_name="Law")[51:52]
GBM_dir_std= pd.read_excel(dir_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
Reweighing
```
#RW+ Logistic Regression
LR_rw=pd.read_excel(rw_lr, sheet_name="Law")[51:52]
LR_rw_std= pd.read_excel(rw_lr, sheet_name="Law")[52:53].add_suffix('_std')
#RW+ GBM
GBM_rw=pd.read_excel(rw_gbm, sheet_name="Law")[51:52]
GBM_rw_std= pd.read_excel(rw_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
LFR
```
#LFR+ Logistic Regression
LR_lfr=pd.read_excel(lfr_lr, sheet_name="Law")[51:52]
LR_lfr_std= pd.read_excel(lfr_lr, sheet_name="Law")[52:53].add_suffix('_std')
#LFR+ GBM
GBM_lfr=pd.read_excel(lfr_gbm, sheet_name="Law")[51:52]
GBM_lfr_std= pd.read_excel(lfr_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
Prejudice Remover
```
#Prejudice Remover
PR=pd.read_excel(PRemover, sheet_name="Law")[51:52]
PR_std= pd.read_excel(PRemover, sheet_name="Law")[52:53].add_suffix('_std')
```
AdDeb
```
# AdDeb
adDeb=pd.read_excel(AdDeb, sheet_name="Law")[51:52]
adDeb_std= pd.read_excel(AdDeb, sheet_name="Law")[52:53].add_suffix('_std')
```
Meta Classifier
```
meta=pd.read_excel(Meta1, sheet_name="Law")[51:52]
meta_std= pd.read_excel(Meta1, sheet_name="Law")[52:53].add_suffix('_std')
```
Equal Odds
```
#EO+ Logistic Regression
LR_EO=pd.read_excel(EO_lr, sheet_name="Law")[51:52]
LR_EO_std= pd.read_excel(EO_lr, sheet_name="Law")[52:53].add_suffix('_std')
#EO+ GBM
GBM_EO=pd.read_excel(EO_gbm, sheet_name="Law")[51:52]
GBM_EO_std= pd.read_excel(EO_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
Calibrated Equal Odds
```
#CalEO+ Logistic Regression
LR_CalEO=pd.read_excel(CalEO_lr, sheet_name="Law")[51:52]
LR_CalEO_std= pd.read_excel(CalEO_lr, sheet_name="Law")[52:53].add_suffix('_std')
#CalEO+ GBM
GBM_CalEO=pd.read_excel(CalEO_gbm, sheet_name="Law")[51:52]
GBM_CalEO_std= pd.read_excel(CalEO_gbm, sheet_name="Law")[52:53].add_suffix('_std')
```
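Every loading cell above repeats the same two-row pattern: row 51 of the "Law" sheet holds the metric means over the folds and row 52 holds the standard deviations. A small helper like the one sketched below could replace the repetition; `load_result` is an illustrative name, not a function used elsewhere in this notebook.
```
import pandas as pd  # already imported earlier in this notebook

# Hypothetical helper: read the mean row (51) and the std row (52, suffixed '_std')
# from one result workbook, mirroring the repeated pattern above.
def load_result(xls, sheet="Law", mean_row=51, std_row=52):
    mean = pd.read_excel(xls, sheet_name=sheet).iloc[mean_row:mean_row + 1].reset_index(drop=True)
    std = pd.read_excel(xls, sheet_name=sheet).iloc[std_row:std_row + 1].add_suffix('_std').reset_index(drop=True)
    return mean, std

# Example usage with workbooks defined earlier:
# LR, LR_std = load_result(baseline_lr)
# GBM, GBM_std = load_result(baseline_gbm)
```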
## Accuracies, Precision, NPV
Comparing the performance metrics of the various baseline and bias mitigation algorithms
```
Accuracy= list([LR['ACCURACY'].to_numpy()[0], GBM['ACCURACY'].to_numpy()[0], LR_dir['ACCURACY'].to_numpy()[0],
GBM_dir['ACCURACY'].to_numpy()[0],LR_rw['ACCURACY'].to_numpy()[0], GBM_rw['ACCURACY'].to_numpy()[0],
LR_lfr['ACCURACY'].to_numpy()[0], GBM_lfr['ACCURACY'].to_numpy()[0], PR['ACCURACY'].to_numpy()[0],
adDeb['ACCURACY'].to_numpy()[0],meta['ACCURACY'].to_numpy()[0],
LR_EO['ACCURACY'].to_numpy()[0],GBM_EO['ACCURACY'].to_numpy()[0],LR_CalEO['ACCURACY'].to_numpy()[0],
GBM_CalEO['ACCURACY'].to_numpy()[0]])
Accuracy_std= list([LR_std['ACCURACY_std'].to_numpy()[0], GBM_std['ACCURACY_std'].to_numpy()[0], LR_dir_std['ACCURACY_std'].to_numpy()[0],
GBM_dir_std['ACCURACY_std'].to_numpy()[0],LR_rw_std['ACCURACY_std'].to_numpy()[0], GBM_rw_std['ACCURACY_std'].to_numpy()[0],
LR_lfr_std['ACCURACY_std'].to_numpy()[0], GBM_lfr_std['ACCURACY_std'].to_numpy()[0], PR_std['ACCURACY_std'].to_numpy()[0],
adDeb_std['ACCURACY_std'].to_numpy()[0],meta_std['ACCURACY_std'].to_numpy()[0],
LR_EO_std['ACCURACY_std'].to_numpy()[0],GBM_EO_std['ACCURACY_std'].to_numpy()[0],LR_CalEO_std['ACCURACY_std'].to_numpy()[0],
GBM_CalEO_std['ACCURACY_std'].to_numpy()[0]])
PPV= list([LR['PPV'].to_numpy()[0], GBM['PPV'].to_numpy()[0], LR_dir['PPV'].to_numpy()[0],
GBM_dir['PPV'].to_numpy()[0],LR_rw['PPV'].to_numpy()[0], GBM_rw['PPV'].to_numpy()[0],
LR_lfr['PPV'].to_numpy()[0], GBM_lfr['PPV'].to_numpy()[0], PR['PPV'].to_numpy()[0],
adDeb['PPV'].to_numpy()[0],meta['PPV'].to_numpy()[0],
LR_EO['PPV'].to_numpy()[0],GBM_EO['PPV'].to_numpy()[0],LR_CalEO['PPV'].to_numpy()[0],
GBM_CalEO['PPV'].to_numpy()[0]])
PPV_std= list([LR_std['PPV_std'].to_numpy()[0], GBM_std['PPV_std'].to_numpy()[0], LR_dir_std['PPV_std'].to_numpy()[0],
GBM_dir_std['PPV_std'].to_numpy()[0],LR_rw_std['PPV_std'].to_numpy()[0], GBM_rw_std['PPV_std'].to_numpy()[0],
LR_lfr_std['PPV_std'].to_numpy()[0], GBM_lfr_std['PPV_std'].to_numpy()[0], PR_std['PPV_std'].to_numpy()[0],
adDeb_std['PPV_std'].to_numpy()[0],meta_std['PPV_std'].to_numpy()[0],
LR_EO_std['PPV_std'].to_numpy()[0],GBM_EO_std['PPV_std'].to_numpy()[0],LR_CalEO_std['PPV_std'].to_numpy()[0],
GBM_CalEO_std['PPV_std'].to_numpy()[0]])
NPV= list([LR['NPV'].to_numpy()[0], GBM['NPV'].to_numpy()[0], LR_dir['NPV'].to_numpy()[0],
GBM_dir['NPV'].to_numpy()[0],LR_rw['NPV'].to_numpy()[0], GBM_rw['NPV'].to_numpy()[0],
LR_lfr['NPV'].to_numpy()[0], GBM_lfr['NPV'].to_numpy()[0], PR['NPV'].to_numpy()[0],
adDeb['NPV'].to_numpy()[0],meta['NPV'].to_numpy()[0],
LR_EO['NPV'].to_numpy()[0],GBM_EO['NPV'].to_numpy()[0],LR_CalEO['NPV'].to_numpy()[0],
GBM_CalEO['NPV'].to_numpy()[0]])
NPV_std= list([LR_std['NPV_std'].to_numpy()[0], GBM_std['NPV_std'].to_numpy()[0], LR_dir_std['NPV_std'].to_numpy()[0],
GBM_dir_std['NPV_std'].to_numpy()[0],LR_rw_std['NPV_std'].to_numpy()[0], GBM_rw_std['NPV_std'].to_numpy()[0],
LR_lfr_std['NPV_std'].to_numpy()[0], GBM_lfr_std['NPV_std'].to_numpy()[0], PR_std['NPV_std'].to_numpy()[0],
adDeb_std['NPV_std'].to_numpy()[0],meta_std['NPV_std'].to_numpy()[0],
LR_EO_std['NPV_std'].to_numpy()[0],GBM_EO_std['NPV_std'].to_numpy()[0],LR_CalEO_std['NPV_std'].to_numpy()[0],
GBM_CalEO_std['NPV_std'].to_numpy()[0]])
fig,axs = plt.subplots(3, sharex=True)
ind = np.arange(15)
width=0.2
# Plotting
axs[0].bar( ind, Accuracy,align='center', yerr= Accuracy_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[1].bar( ind, PPV,align='center', yerr= PPV_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[2].bar( ind, NPV,align='center', yerr= NPV_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[0].set_ylabel('Accuracy')
axs[1].set_ylabel('Precision')
axs[2].set_ylabel('NPV')
xlabels= ['LR','GBM','DIR+LR','DIR+GBM','RW+LR','RW+GBM','LFR+LR','LFR+GBM','PRemover','AdDeb','Meta','LR+EqOdds','GBM+EqOdds','LR+CalEqOdds','GBM+CalEqOdds']
plt.xticks(ind + width / 2, xlabels, rotation= 'vertical' )
fig.suptitle('Predictive Performance | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Performance.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
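The cell above assembles each metric list element by element, which is easy to get out of sync with `xlabels`. Under the assumption that the per-algorithm dataframes are collected in an ordered mapping, the same lists can be built in a loop; the `models` dict below only reuses names already defined in this section, while `metric_lists` is an illustrative helper name.
```
# Sketch: build value/error lists from an ordered mapping of label -> (mean df, std df)
models = {
    'LR': (LR, LR_std), 'GBM': (GBM, GBM_std),
    'DIR+LR': (LR_dir, LR_dir_std), 'DIR+GBM': (GBM_dir, GBM_dir_std),
    'RW+LR': (LR_rw, LR_rw_std), 'RW+GBM': (GBM_rw, GBM_rw_std),
    'LFR+LR': (LR_lfr, LR_lfr_std), 'LFR+GBM': (GBM_lfr, GBM_lfr_std),
    'PRemover': (PR, PR_std), 'AdDeb': (adDeb, adDeb_std), 'Meta': (meta, meta_std),
    'LR+EqOdds': (LR_EO, LR_EO_std), 'GBM+EqOdds': (GBM_EO, GBM_EO_std),
    'LR+CalEqOdds': (LR_CalEO, LR_CalEO_std), 'GBM+CalEqOdds': (GBM_CalEO, GBM_CalEO_std),
}

def metric_lists(metric):
    values = [m[metric].to_numpy()[0] for m, _ in models.values()]
    errors = [s[metric + '_std'].to_numpy()[0] for _, s in models.values()]
    return values, errors

# Accuracy, Accuracy_std = metric_lists('ACCURACY')
# PPV, PPV_std = metric_lists('PPV')
# NPV, NPV_std = metric_lists('NPV')
# xlabels = list(models)   # keeps the bar labels in the same order as the values
```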
## DI, Consistency, and SP
Comparing Disparate Impact (DI), Consistency, and Statistical Parity across the biased data, the baseline classifiers, and the bias mitigation algorithms
```
DI= list([LR['DATA_DI'].to_numpy()[0],LR['DI'].to_numpy()[0], GBM['DI'].to_numpy()[0], LR_dir['DI'].to_numpy()[0],
GBM_dir['DI'].to_numpy()[0],LR_rw['DI'].to_numpy()[0], GBM_rw['DI'].to_numpy()[0],
LR_lfr['DI'].to_numpy()[0], GBM_lfr['DI'].to_numpy()[0], PR['DI'].to_numpy()[0],
adDeb['DI'].to_numpy()[0],meta['DI'].to_numpy()[0],
LR_EO['DI'].to_numpy()[0],GBM_EO['DI'].to_numpy()[0],LR_CalEO['DI'].to_numpy()[0],
GBM_CalEO['DI'].to_numpy()[0]])
DI_std= list([LR_std['DATA_DI_std'].to_numpy()[0], LR_std['DI_std'].to_numpy()[0],GBM_std['DI_std'].to_numpy()[0], LR_dir_std['DI_std'].to_numpy()[0],
GBM_dir_std['DI_std'].to_numpy()[0],LR_rw_std['DI_std'].to_numpy()[0], GBM_rw_std['DI_std'].to_numpy()[0],
LR_lfr_std['DI_std'].to_numpy()[0], GBM_lfr_std['DI_std'].to_numpy()[0], PR_std['DI_std'].to_numpy()[0],
adDeb_std['DI_std'].to_numpy()[0],meta_std['DI_std'].to_numpy()[0],
LR_EO_std['DI_std'].to_numpy()[0],GBM_EO_std['DI_std'].to_numpy()[0],LR_CalEO_std['DI_std'].to_numpy()[0],
GBM_CalEO_std['DI_std'].to_numpy()[0]])
SP= list([LR['DATA_SP'].to_numpy()[0],LR['SP'].to_numpy()[0], GBM['SP'].to_numpy()[0], LR_dir['SP'].to_numpy()[0],
GBM_dir['SP'].to_numpy()[0],LR_rw['SP'].to_numpy()[0], GBM_rw['SP'].to_numpy()[0],
LR_lfr['SP'].to_numpy()[0], GBM_lfr['SP'].to_numpy()[0], PR['SP'].to_numpy()[0],
adDeb['SP'].to_numpy()[0],meta['SP'].to_numpy()[0],
LR_EO['SP'].to_numpy()[0],GBM_EO['SP'].to_numpy()[0],LR_CalEO['SP'].to_numpy()[0],
GBM_CalEO['SP'].to_numpy()[0]])
SP_std= list([LR_std['DATA_SP_std'].to_numpy()[0], LR_std['SP_std'].to_numpy()[0],GBM_std['SP_std'].to_numpy()[0], LR_dir_std['SP_std'].to_numpy()[0],
GBM_dir_std['SP_std'].to_numpy()[0],LR_rw_std['SP_std'].to_numpy()[0], GBM_rw_std['SP_std'].to_numpy()[0],
LR_lfr_std['SP_std'].to_numpy()[0], GBM_lfr_std['SP_std'].to_numpy()[0], PR_std['SP_std'].to_numpy()[0],
adDeb_std['SP_std'].to_numpy()[0],meta_std['SP_std'].to_numpy()[0],
LR_EO_std['SP_std'].to_numpy()[0],GBM_EO_std['SP_std'].to_numpy()[0],LR_CalEO_std['SP_std'].to_numpy()[0],
GBM_CalEO_std['SP_std'].to_numpy()[0]])
CONSISTENCY= list([LR['DATA_CONS'].to_numpy()[0],LR['CONSISTENCY'].to_numpy()[0], GBM['CONSISTENCY'].to_numpy()[0], LR_dir['CONSISTENCY'].to_numpy()[0],
GBM_dir['CONSISTENCY'].to_numpy()[0],LR_rw['CONSISTENCY'].to_numpy()[0], GBM_rw['CONSISTENCY'].to_numpy()[0],
LR_lfr['CONSISTENCY'].to_numpy()[0], GBM_lfr['CONSISTENCY'].to_numpy()[0], PR['CONSISTENCY'].to_numpy()[0],
adDeb['CONSISTENCY'].to_numpy()[0],meta['CONSISTENCY'].to_numpy()[0],
LR_EO['CONSISTENCY'].to_numpy()[0],GBM_EO['CONSISTENCY'].to_numpy()[0],LR_CalEO['CONSISTENCY'].to_numpy()[0],
GBM_CalEO['CONSISTENCY'].to_numpy()[0]])
CONSISTENCY_std= list([LR_std['DATA_CONS_std'].to_numpy()[0], LR_std['CONSISTENCY_std'].to_numpy()[0],GBM_std['CONSISTENCY_std'].to_numpy()[0], LR_dir_std['CONSISTENCY_std'].to_numpy()[0],
GBM_dir_std['CONSISTENCY_std'].to_numpy()[0],LR_rw_std['CONSISTENCY_std'].to_numpy()[0], GBM_rw_std['CONSISTENCY_std'].to_numpy()[0],
LR_lfr_std['CONSISTENCY_std'].to_numpy()[0], GBM_lfr_std['CONSISTENCY_std'].to_numpy()[0], PR_std['CONSISTENCY_std'].to_numpy()[0],
adDeb_std['CONSISTENCY_std'].to_numpy()[0],meta_std['CONSISTENCY_std'].to_numpy()[0],
LR_EO_std['CONSISTENCY_std'].to_numpy()[0],GBM_EO_std['CONSISTENCY_std'].to_numpy()[0],LR_CalEO_std['CONSISTENCY_std'].to_numpy()[0],
GBM_CalEO_std['CONSISTENCY_std'].to_numpy()[0]])
fig,axs = plt.subplots(3, sharex=True)
ind = np.arange(16)
width=0.2
axs[0].bar( ind, SP,align='center', yerr= SP_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[0].axhline(y=0, color= 'green',linestyle= '--')
axs[0].set_ylabel('SP')
axs[1].bar( ind, CONSISTENCY,align='center', yerr= CONSISTENCY_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[1].set_ylabel('Consistency')
axs[2].bar( ind, DI,align='center', yerr= DI_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[2].axhline(y=1, color= 'green',linestyle= '--')
axs[2].axhline(y=0, color= 'black',linestyle= '--')
axs[2].set_ylabel('DI')
xlabels= ['Biased_Data','LR','GBM','DIR+LR','DIR+GBM','RW+LR','RW+GBM','LFR+LR','LFR+GBM','PRemover','AdDeb','Meta','LR+EqOdds','GBM+EqOdds','LR+CalEqOdds','GBM+CalEqOdds']
plt.xticks(ind + width / 2, xlabels, rotation= 'vertical' )
fig.suptitle('Fairness | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Fairness.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
```
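For clarity, the two group-fairness quantities plotted above follow the usual AIF360-style definitions: statistical parity (SP) is the difference in positive prediction rates between the unprivileged and privileged groups, and disparate impact (DI) is their ratio, so the ideal values are 0 and 1 respectively. A minimal sketch, assuming a binary prediction vector `y_pred` and a protected-attribute vector `prot` with 1 marking the privileged group, is shown below.
```
import numpy as np

def sp_and_di(y_pred, prot):
    # Positive prediction rate in each group (prot == 1 marks the privileged group,
    # an assumption for this sketch).
    y_pred, prot = np.asarray(y_pred), np.asarray(prot)
    p_unpriv = y_pred[prot == 0].mean()   # P(y_hat = 1 | unprivileged)
    p_priv = y_pred[prot == 1].mean()     # P(y_hat = 1 | privileged)
    return p_unpriv - p_priv, p_unpriv / p_priv

# sp, di = sp_and_di(y_pred, prot)   # ideal: SP = 0, DI = 1
```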
github_jupyter
import matplotlib.pyplot as plt
import pylab
from sklearn.metrics import accuracy_score , classification_report, confusion_matrix, roc_auc_score,mean_squared_error,f1_score
import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport
import seaborn as sb
from sklearn.utils import resample
from sklearn.model_selection import train_test_split, KFold
from numpy import loadtxt
from xgboost import XGBClassifier
import sys
sys.path.append("../")
import os
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
baseline_gbm= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/GBM/gbm_Results.xlsx")
baseline_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/BaseLines/LogReg/LR_Results.xlsx")
#dir level 1
dir_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_1/GBM/gbm_Results.xlsx")
dir_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_1/LogReg/LR_Results.xlsx")
#reweighing
rw_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/GBM/gbm_Results.xlsx")
rw_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Reweighing/LogReg/LR_Results.xlsx")
#lfr
lfr_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/gbm_Results.xlsx")
lfr_lr= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/LFR/NewLFR_Results/LR_Results.xlsx")
#Adversarial Debiasing
AdDeb= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/AdDeb/AdDeb.xlsx")
#PRemover
PRemover=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover100.xlsx")
#Equal Odds
EO_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/EqualOdds/EO_gbm.xlsx")
EO_lr=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/EqualOdds/EO_LogReg.xlsx")
#CalEqual Odds
CalEO_gbm=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/CalEqualOdds/CalEO_gbm.xlsx")
CalEO_lr=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/CalEqualOdds/CalEO_LogReg.xlsx")
#varying levels of dir repair
#level 0
dir_gbm_0=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p0/GBM/gbm_Results.xlsx")
dir_lr_0= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p0/LogReg/LR_Results.xlsx")
#level 0.3
dir_gbm_3=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p3/GBM/gbm_Results.xlsx")
dir_lr_3= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p3/LogReg/LR_Results.xlsx")
#level 0.5
dir_gbm_5=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p5/GBM/gbm_Results.xlsx")
dir_lr_5= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p5/LogReg/LR_Results.xlsx")
#level 0.7
dir_gbm_7=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p7/GBM/gbm_Results.xlsx")
dir_lr_7= pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/DIR/level_p7/LogReg/LR_Results.xlsx")
#PRemover at other values
PRemover75=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover75.xlsx")
PRemover50=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover50.xlsx")
PRemover25=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover25.xlsx")
PRemover1=pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/PRemover/PRemover1.xlsx")
#Meta Classifier at various values
Meta0 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta0.xlsx")
Meta2 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta2.xlsx")
Meta4 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta4.xlsx")
Meta6 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta6.xlsx")
Meta8 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta8.xlsx")
Meta1 = pd.ExcelFile(r"/content/gdrive/MyDrive/Datasets/SurveyData/RESULTS/Meta/Meta10.xlsx")
#baselines
Law_gbm_baseline=pd.read_excel(baseline_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_baseline_std= pd.read_excel(baseline_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_baseline= pd.concat([Law_gbm_baseline,Law_gbm_baseline_std], axis=1).drop('index',axis=1) #reset_index adds an index column
#logistic regression
Law_lr_baseline= pd.read_excel(baseline_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_baseline_std= pd.read_excel(baseline_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_baseline= pd.concat([Law_lr_baseline,Law_lr_baseline_std], axis=1).drop('index',axis=1)
# #disparate impact remover
Law_gbm_dir=pd.read_excel(dir_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_dir_std= pd.read_excel(dir_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_dir= pd.concat([Law_gbm_dir,Law_gbm_dir_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_dir= pd.read_excel(dir_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_dir_std= pd.read_excel(dir_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_dir= pd.concat([Law_lr_dir,Law_lr_dir_std], axis=1).drop('index',axis=1)
# #reweighing
Law_gbm_rw=pd.read_excel(rw_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_rw_std= pd.read_excel(rw_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_rw= pd.concat([Law_gbm_rw,Law_gbm_rw_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_rw= pd.read_excel(rw_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_rw_std= pd.read_excel(rw_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_rw= pd.concat([Law_lr_rw,Law_lr_rw_std], axis=1).drop('index',axis=1)
# lfr
Law_gbm_lfr=pd.read_excel(lfr_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_lfr_std= pd.read_excel(lfr_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_lfr= pd.concat([Law_gbm_lfr,Law_gbm_lfr_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_lfr= pd.read_excel(lfr_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_lfr_std= pd.read_excel(lfr_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_lfr= pd.concat([Law_lr_lfr,Law_lr_lfr_std], axis=1).drop('index',axis=1)
#Prejudice Remover
Law_pr_remover= pd.read_excel(PRemover, sheet_name="Law")[51:52].reset_index()
Law_pr_remover_std= pd.read_excel(PRemover, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_pr_remover= pd.concat([Law_pr_remover,Law_pr_remover_std], axis=1).drop('index',axis=1)
#Adversarial Debiasing
Law_AdDeb= pd.read_excel(AdDeb, sheet_name="Law")[51:52].reset_index()
Law_AdDeb_std= pd.read_excel(AdDeb, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_AdDeb= pd.concat([Law_AdDeb,Law_AdDeb_std], axis=1).drop('index',axis=1)
#Meta Classifier
Law_Meta= pd.read_excel(Meta1, sheet_name="Law")[51:52].reset_index()
Law_Meta_std= pd.read_excel(Meta1, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_Meta= pd.concat([Law_Meta,Law_Meta_std], axis=1).drop('index',axis=1)
# EqOdds
Law_gbm_EO=pd.read_excel(EO_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_EO_std= pd.read_excel(EO_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_EO= pd.concat([Law_gbm_EO,Law_gbm_EO_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_EO= pd.read_excel(EO_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_EO_std= pd.read_excel(EO_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_EO= pd.concat([Law_lr_EO,Law_lr_EO_std], axis=1).drop('index',axis=1)
# CalEqOdds
Law_gbm_CalEO=pd.read_excel(CalEO_gbm, sheet_name="Law")[51:52].reset_index()
Law_gbm_CalEO_std= pd.read_excel(CalEO_gbm, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_gbm_CalEO= pd.concat([Law_gbm_CalEO,Law_gbm_CalEO_std], axis=1).drop('index',axis=1)
#logistic regression
Law_lr_CalEO= pd.read_excel(CalEO_lr, sheet_name="Law")[51:52].reset_index()
Law_lr_CalEO_std= pd.read_excel(CalEO_lr, sheet_name="Law")[52:53].add_suffix('_std').reset_index()
#avg and std together
Law_lr_CalEO= pd.concat([Law_lr_CalEO,Law_lr_CalEO_std], axis=1).drop('index',axis=1)
#ideal dataframes
ideal = pd.DataFrame(index=[0], data={'x':0, 'y':1})
idealfairness = pd.DataFrame(index=[0], data={'x':0, 'y':0})
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['SP'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['SP'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['SP'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['SP'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['SP'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['SP'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['EO'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['EO'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['EO'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['EO'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['EO'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['EO'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['BGEI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['SP'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['SP'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['SP'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['SP'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['SP'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['SP'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['EO'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['EO'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['EO'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['EO'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['EO'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['EO'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['BGEI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['SP'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['SP'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['SP'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['SP'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['SP'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['SP'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['SP'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['SP'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['SP'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['SP'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['SP'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['SP'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['SP'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['SP'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['SP'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['EO'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['EO'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['EO'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['EO'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['EO'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['EO'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['EO'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['EO'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['EO'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['EO'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['EO'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['EO'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['EO'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['EO'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['EO'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['BGEI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['BGEI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['BGEI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['BGEI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['BGEI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['BGEI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['BGEI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['BGEI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['BGEI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['BGEI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['BGEI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['BGEI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['BGEI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['BGEI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['BGEI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
ax.label_outer()
axs[0,0].set_ylabel('Accuracy')
axs[0,0].set_title('Statistical Parity')
axs[1,0].set_ylabel('Precision')
axs[0,1].set_title('Equal Odds')
axs[2,0].set_ylabel('NPV')
axs[0,2].set_title('BGEI')
fig.suptitle('Data: Law')
#axs[0,0].legend(bbox_to_anchor=(4.8, 1.2),ncol=1)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/PerFair2.png',dpi=300, format='png', bbox_inches='tight')
figsize = (3, 3)
fig_leg = plt.figure(figsize=figsize)
ax_leg = fig_leg.add_subplot(111)
# add the legend from the previous axes
ax_leg.legend(*axs[0,0].get_legend_handles_labels())
# hide the axes frame and the x/y labels
ax_leg.axis('off')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/legend.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,1,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['ACCURACY'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['ACCURACY'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['ACCURACY'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['ACCURACY'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['ACCURACY'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['ACCURACY'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['ACCURACY'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['ACCURACY'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['ACCURACY'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['ACCURACY'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['ACCURACY'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['ACCURACY'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['ACCURACY'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['ACCURACY'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['ACCURACY'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,1,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['PPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['PPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['PPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['PPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['PPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['PPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['PPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['PPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['PPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['PPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['PPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['PPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['PPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['PPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['PPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,1,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['NPV'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['NPV'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['NPV'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['NPV'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['NPV'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['NPV'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['NPV'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['NPV'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['NPV'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['NPV'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['NPV'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['NPV'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['NPV'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['NPV'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['NPV'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(ideal['x'],ideal['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
    ax.label_outer()
axs[0,0].set_ylabel('Accuracy')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('Precision')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('NPV')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/PerFair1.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
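# Group vs. individual fairness (part 1): Statistical Parity, Equal Odds and BGEI on the y-axes against Consistency, WGEI and WGTI on the x-axes.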
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,0,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['SP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['SP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['SP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['SP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['SP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['SP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['SP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['SP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['SP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['SP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['SP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['SP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['SP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['SP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['SP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,0,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['EO'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['EO'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['EO'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['EO'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['EO'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['EO'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['EO'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['EO'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['EO'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['EO'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['EO'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['EO'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['EO'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['EO'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['EO'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,0,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['BGEI'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['BGEI'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['BGEI'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['BGEI'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['BGEI'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['BGEI'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['BGEI'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['BGEI'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['BGEI'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['BGEI'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['BGEI'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['BGEI'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['BGEI'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['BGEI'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['BGEI'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
    ax.label_outer()
axs[0,0].set_ylabel('Statistical Parity')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('Equal Odds')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('BGEI')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/GFvsIF1.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
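# Group vs. individual fairness (part 2): Equal Opportunity, PPV difference and NPV difference on the y-axes against Consistency, WGEI and WGTI on the x-axes.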
fig,axs = plt.subplots(nrows= 3, ncols= 3)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,0].scatter(1,0,color='black', marker= '*', s=25)
axs[0,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,1].scatter(Law_Meta['WGEI'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[0,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['EOP'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['EOP'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['EOP'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['EOP'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['EOP'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['EOP'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['EOP'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[0,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['EOP'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[0,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['EOP'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[0,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['EOP'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[0,2].scatter(Law_Meta['WGTI'],Law_Meta['EOP'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[0,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['EOP'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['EOP'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['EOP'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[0,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['EOP'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[0,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#**********************************************************************************************************************************************
axs[1,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,0].scatter(1,0,color='black', marker= '*', s=25)
axs[1,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,1].scatter(Law_Meta['WGEI'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[1,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['PPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['PPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['PPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['PPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['PPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['PPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['PPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[1,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['PPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[1,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['PPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[1,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['PPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[1,2].scatter(Law_Meta['WGTI'],Law_Meta['PPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[1,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['PPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['PPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['PPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[1,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['PPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[1,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
#***********************************************************************************************************************************************
axs[2,0].scatter(Law_lr_baseline['CONSISTENCY'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_baseline['CONSISTENCY'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_dir['CONSISTENCY'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_dir['CONSISTENCY'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_rw['CONSISTENCY'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_rw['CONSISTENCY'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_lfr['CONSISTENCY'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,0].scatter(Law_gbm_lfr['CONSISTENCY'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,0].scatter(Law_pr_remover['CONSISTENCY'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,0].scatter(Law_AdDeb['CONSISTENCY'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,0].scatter(Law_Meta['CONSISTENCY'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,0].scatter(Law_lr_EO['CONSISTENCY'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_EO['CONSISTENCY'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(Law_lr_CalEO['CONSISTENCY'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,0].scatter(Law_gbm_CalEO['CONSISTENCY'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,0].scatter(1,0,color='black', marker= '*', s=25)
axs[2,1].scatter(Law_lr_baseline['WGEI'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_baseline['WGEI'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_dir['WGEI'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_dir['WGEI'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_rw['WGEI'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_rw['WGEI'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_lfr['WGEI'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,1].scatter(Law_gbm_lfr['WGEI'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,1].scatter(Law_pr_remover['WGEI'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,1].scatter(Law_AdDeb['WGEI'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,1].scatter(Law_Meta['WGEI'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,1].scatter(Law_lr_EO['WGEI'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_EO['WGEI'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(Law_lr_CalEO['WGEI'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,1].scatter(Law_gbm_CalEO['WGEI'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,1].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
axs[2,2].scatter(Law_lr_baseline['WGTI'],Law_lr_baseline['NPV_diff'],color='tab:red', label='LR', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_baseline['WGTI'],Law_gbm_baseline['NPV_diff'],color='tab:red', label='GBM', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_dir['WGTI'],Law_lr_dir['NPV_diff'],color='tab:green', label='LR_DIRemover', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_dir['WGTI'],Law_gbm_dir['NPV_diff'],color='tab:green', label='GBM_DIRemover', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_rw['WGTI'],Law_lr_rw['NPV_diff'],color='tab:olive', label='LR_Reweigh', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_rw['WGTI'],Law_gbm_rw['NPV_diff'],color='tab:olive', label='GBM_Reweigh', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_lfr['WGTI'],Law_lr_lfr['NPV_diff'],color='tab:blue', label='LR_LFR', linestyle='-',marker= "D",)
axs[2,2].scatter(Law_gbm_lfr['WGTI'],Law_gbm_lfr['NPV_diff'],color='tab:blue', label='GBM_LFR', linestyle='-',marker= "o")
axs[2,2].scatter(Law_pr_remover['WGTI'],Law_pr_remover['NPV_diff'],color='tab:gray', label='PrejudiceRemover', linestyle='-',marker= "v",)
axs[2,2].scatter(Law_AdDeb['WGTI'],Law_AdDeb['NPV_diff'],color='tab:brown', label='AdDeb', linestyle='-',marker= "s")
axs[2,2].scatter(Law_Meta['WGTI'],Law_Meta['NPV_diff'],color='tab:pink', label='Meta', linestyle='-',marker= "x")
axs[2,2].scatter(Law_lr_EO['WGTI'],Law_lr_EO['NPV_diff'],color='tab:purple', label='LR_EqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_EO['WGTI'],Law_gbm_EO['NPV_diff'],color='tab:purple', label='GBM_EqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(Law_lr_CalEO['WGTI'],Law_lr_CalEO['NPV_diff'],color='tab:orange', label='LR_CalEqOdds', linestyle='-',marker= "D")
axs[2,2].scatter(Law_gbm_CalEO['WGTI'],Law_gbm_CalEO['NPV_diff'],color='tab:orange', label='GBM_CalEqOdds', linestyle='-',marker= "o")
axs[2,2].scatter(idealfairness['x'],idealfairness['y'],color='black', marker= '*', s=25)
for ax in axs.flat:
    ax.label_outer()
axs[0,0].set_ylabel('Equal Opportunity')
axs[0,0].set_title('Consistency')
axs[1,0].set_ylabel('PPV-diff')
axs[0,1].set_title('WGEI')
axs[2,0].set_ylabel('NPV-diff')
axs[0,2].set_title('WGTI')
fig.suptitle('Data: Law')
#axs[0,0].legend(bbox_to_anchor=(4.8, 1.2),ncol=1)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/GFvsIF2.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
#Law
#accuracy and disparate impact values at varying repair levels for GBM
gbm0=pd.read_excel(dir_gbm_0, sheet_name="Law").iloc[51:52]
gbm3=pd.read_excel(dir_gbm_3, sheet_name="Law").iloc[51:52]
gbm5=pd.read_excel(dir_gbm_5, sheet_name="Law").iloc[51:52] # old sheet was not working; had to recreate a new tab (Law_use)
gbm7=pd.read_excel(dir_gbm_7, sheet_name="Law").iloc[51:52]
gbm1= pd.read_excel(dir_gbm, sheet_name='Law').iloc[51:52]
#all together
gbms= pd.concat([gbm0,gbm3, gbm5, gbm7, gbm1])
gbms['Repair Level']=[0,0.3,0.5,0.7,1]
#accuracy and disparate impact values at varying repair levels for LR
lr0=pd.read_excel(dir_lr_0, sheet_name="Law").iloc[51:52]
lr3=pd.read_excel(dir_lr_3, sheet_name="Law").iloc[51:52]
lr5=pd.read_excel(dir_lr_5, sheet_name="Law").iloc[51:52] # old sheet was not working; had to recreate a new tab (Law_use)
lr7=pd.read_excel(dir_lr_7, sheet_name="Law").iloc[51:52]
lr1= pd.read_excel(dir_lr, sheet_name='Law').iloc[51:52]
#all together
lrs= pd.concat([lr0, lr3, lr5, lr7, lr1])
lrs['Repair Level']=[0,0.3,0.5,0.7,1]
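# Plot accuracy (top) and disparate impact (bottom) against the DIRemover repair level for GBM and logistic regression.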
fig, (ax1, ax2) = plt.subplots(2, 1, sharex= True)
ax1.plot(gbms['Repair Level'],gbms['ACCURACY'], marker='x',label='GBM')
ax1.plot(lrs['Repair Level'], lrs['ACCURACY'], marker='s',label='LogReg')
ax2.plot(gbms['Repair Level'], gbms['DI'], marker='x',label='GBM')
ax2.plot(lrs['Repair Level'], lrs['DI'], marker='s',label='LogReg')
#ax2.plot(1,1, marker= '*', color='black')
ax2.axhline(y=1.0, linestyle='--', color= 'green',label='Ideal DI')
ax2.axhline(y=0.8, linestyle='--', color= 'red',label='minimum DI')
ax1.set_ylabel('Accuracy')
ax1.set_title("Disparate Impact Remover | Data: Law")
ax2.set_ylabel('Disparate Impact (DI)')
ax2.set_xlabel('Repair Level (\u03BB)')
plt.legend(bbox_to_anchor=(0.7, -0.4), ncol= 2)
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/DIRemover.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
#Prejudice Remover results at varying values of the tuning parameter eta
PR1=pd.read_excel(PRemover1, sheet_name="Law").iloc[51:52]
PR25=pd.read_excel(PRemover25, sheet_name="Law").iloc[51:52]
PR50=pd.read_excel(PRemover50, sheet_name="Law").iloc[51:52] # old sheet was not working; had to recreate a new tab (Law_use)
PR75=pd.read_excel(PRemover75, sheet_name="Law").iloc[51:52]
PR100= pd.read_excel(PRemover, sheet_name='Law').iloc[51:52]
#all together
prs= pd.concat([PR1,PR25,PR50,PR75,PR100])
prs['Etas']=[1,25,50,75,100]
#Meta algorithm results at varying values of the tuning parameter tau
meta_0=pd.read_excel(Meta0, sheet_name="Law").iloc[51:52]
meta_2=pd.read_excel(Meta2, sheet_name="Law").iloc[51:52]
meta_4=pd.read_excel(Meta4, sheet_name="Law").iloc[51:52] # old sheet was not working; had to recreate a new tab (Law_use)
meta_6=pd.read_excel(Meta6, sheet_name="Law").iloc[51:52]
meta_8= pd.read_excel(Meta8, sheet_name='Law').iloc[51:52]
meta_1=pd.read_excel(Meta1, sheet_name='Law').iloc[51:52]
#all together
metas= pd.concat([meta_0, meta_2, meta_4, meta_6, meta_8, meta_1])
metas['Tau']=[0,0.2,0.4,0.6,0.8,1.0]
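# 2x2 grid: predictive performance (top row) and fairness measures (bottom row) as the Prejudice Remover eta and the Meta-algorithm tau are varied.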
fig,axs = plt.subplots(nrows= 2, ncols= 2)
plt.rcParams.update({'figure.max_open_warning': 0})
axs[0,0].plot(prs['Etas'],prs['ACCURACY'],color='tab:red', label='Accuracy', linestyle='-',marker= "D")
axs[0,0].plot(prs['Etas'],prs['PPV'],color='tab:orange', label='Precision', linestyle='-',marker= "s")
axs[0,0].plot(prs['Etas'],prs['NPV'],color='tab:purple', label='NPV', linestyle='-',marker= "*")
axs[0,1].plot(metas['Tau'],metas['ACCURACY'],color='tab:red', label='Accuracy', linestyle='-',marker= "D")
axs[0,1].plot(metas['Tau'],metas['PPV'],color='tab:orange', label='Precision', linestyle='-',marker= "s")
axs[0,1].plot(metas['Tau'],metas['NPV'],color='tab:purple', label='NPV', linestyle='-',marker= "*")
axs[1,0].plot(prs['Etas'],prs['SP'],color='tab:olive', label='SP', linestyle='-',marker= "v")
axs[1,0].plot(prs['Etas'],prs['WGEI'],color='tab:blue', label='WGEI', linestyle='-',marker= "o")
axs[1,0].plot(prs['Etas'],prs['EO'],color='tab:gray', label='EO', linestyle='-',marker= "x")
axs[1,0].plot(prs['Etas'],prs['BGEI'],color='tab:brown', label='BGEI', linestyle='-',marker= "<")
axs[1,1].plot(metas['Tau'],metas['SP'],color='tab:olive', label='SP', linestyle='-',marker= "v")
axs[1,1].plot(metas['Tau'],metas['WGEI'],color='tab:blue', label='WGEI', linestyle='-',marker= "o")
axs[1,1].plot(metas['Tau'],metas['EO'],color='tab:gray', label='EO', linestyle='-',marker= "x")
axs[1,1].plot(metas['Tau'],metas['BGEI'],color='tab:brown', label='BGEI', linestyle='-',marker= "<")
axs[0,0].set_title('Prejudice Remover')
axs[0,0].set_ylabel('Predictive Performance')
axs[1,0].set_ylabel('Fairness Measure')
axs[0,1].set_title('Meta Algorithm')
axs[1,0].set_xlabel('Tuning Parameter (\u03B7)')
axs[1,1].set_xlabel('Tuning Parameter (\u03C4)')
# Hide x labels and tick labels for top plots and y ticks for right plots.
for ax in axs.flat:
    ax.label_outer()
axs[0,0].legend(bbox_to_anchor=(2.8, 1),ncol=1)
axs[1,1].legend(bbox_to_anchor=(1,1),ncol=1)
fig.suptitle('Effect of hyper-parameter variation on performance and fairness | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Hypers.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
#Baselines
#Logistic Regression Baseline
LR=pd.read_excel(baseline_lr, sheet_name="Law")[51:52]
LR_std= pd.read_excel(baseline_lr, sheet_name="Law")[52:53].add_suffix('_std')
#GBM Baseline
GBM=pd.read_excel(baseline_gbm, sheet_name="Law")[51:52]
GBM_std= pd.read_excel(baseline_gbm, sheet_name="Law")[52:53].add_suffix('_std')
#DIR+ Logistic Regression
LR_dir=pd.read_excel(dir_lr, sheet_name="Law")[51:52]
LR_dir_std= pd.read_excel(dir_lr, sheet_name="Law")[52:53].add_suffix('_std')
#DIR+ GBM
GBM_dir=pd.read_excel(dir_gbm, sheet_name="Law")[51:52]
GBM_dir_std= pd.read_excel(dir_gbm, sheet_name="Law")[52:53].add_suffix('_std')
#RW+ Logistic Regression
LR_rw=pd.read_excel(rw_lr, sheet_name="Law")[51:52]
LR_rw_std= pd.read_excel(rw_lr, sheet_name="Law")[52:53].add_suffix('_std')
#RW+ GBM
GBM_rw=pd.read_excel(rw_gbm, sheet_name="Law")[51:52]
GBM_rw_std= pd.read_excel(rw_gbm, sheet_name="Law")[52:53].add_suffix('_std')
#LFR+ Logistic Regression
LR_lfr=pd.read_excel(lfr_lr, sheet_name="Law")[51:52]
LR_lfr_std= pd.read_excel(lfr_lr, sheet_name="Law")[52:53].add_suffix('_std')
#LFR+ GBM
GBM_lfr=pd.read_excel(lfr_gbm, sheet_name="Law")[51:52]
GBM_lfr_std= pd.read_excel(lfr_gbm, sheet_name="Law")[52:53].add_suffix('_std')
#Prejudice Remover
PR=pd.read_excel(PRemover, sheet_name="Law")[51:52]
PR_std= pd.read_excel(PRemover, sheet_name="Law")[52:53].add_suffix('_std')
# AdDeb
adDeb=pd.read_excel(AdDeb, sheet_name="Law")[51:52]
adDeb_std= pd.read_excel(AdDeb, sheet_name="Law")[52:53].add_suffix('_std')
meta=pd.read_excel(Meta1, sheet_name="Law")[51:52]
meta_std= pd.read_excel(Meta1, sheet_name="Law")[52:53].add_suffix('_std')
#EO+ Logistic Regression
LR_EO=pd.read_excel(EO_lr, sheet_name="Law")[51:52]
LR_EO_std= pd.read_excel(EO_lr, sheet_name="Law")[52:53].add_suffix('_std')
#EO+ GBM
GBM_EO=pd.read_excel(EO_gbm, sheet_name="Law")[51:52]
GBM_EO_std= pd.read_excel(EO_gbm, sheet_name="Law")[52:53].add_suffix('_std')
#CalEO+ Logistic Regression
LR_CalEO=pd.read_excel(CalEO_lr, sheet_name="Law")[51:52]
LR_CalEO_std= pd.read_excel(CalEO_lr, sheet_name="Law")[52:53].add_suffix('_std')
#CalEO+ GBM
GBM_CalEO=pd.read_excel(CalEO_gbm, sheet_name="Law")[51:52]
GBM_CalEO_std= pd.read_excel(CalEO_gbm, sheet_name="Law")[52:53].add_suffix('_std')
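# Collect the mean of each predictive-performance metric (and its standard deviation) across all methods for the bar charts below.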
Accuracy= list([LR['ACCURACY'].to_numpy()[0], GBM['ACCURACY'].to_numpy()[0], LR_dir['ACCURACY'].to_numpy()[0],
GBM_dir['ACCURACY'].to_numpy()[0],LR_rw['ACCURACY'].to_numpy()[0], GBM_rw['ACCURACY'].to_numpy()[0],
LR_lfr['ACCURACY'].to_numpy()[0], GBM_lfr['ACCURACY'].to_numpy()[0], PR['ACCURACY'].to_numpy()[0],
adDeb['ACCURACY'].to_numpy()[0],meta['ACCURACY'].to_numpy()[0],
LR_EO['ACCURACY'].to_numpy()[0],GBM_EO['ACCURACY'].to_numpy()[0],LR_CalEO['ACCURACY'].to_numpy()[0],
GBM_CalEO['ACCURACY'].to_numpy()[0]])
Accuracy_std= list([LR_std['ACCURACY_std'].to_numpy()[0], GBM_std['ACCURACY_std'].to_numpy()[0], LR_dir_std['ACCURACY_std'].to_numpy()[0],
GBM_dir_std['ACCURACY_std'].to_numpy()[0],LR_rw_std['ACCURACY_std'].to_numpy()[0], GBM_rw_std['ACCURACY_std'].to_numpy()[0],
LR_lfr_std['ACCURACY_std'].to_numpy()[0], GBM_lfr_std['ACCURACY_std'].to_numpy()[0], PR_std['ACCURACY_std'].to_numpy()[0],
adDeb_std['ACCURACY_std'].to_numpy()[0],meta_std['ACCURACY_std'].to_numpy()[0],
LR_EO_std['ACCURACY_std'].to_numpy()[0],GBM_EO_std['ACCURACY_std'].to_numpy()[0],LR_CalEO_std['ACCURACY_std'].to_numpy()[0],
GBM_CalEO_std['ACCURACY_std'].to_numpy()[0]])
PPV= list([LR['PPV'].to_numpy()[0], GBM['PPV'].to_numpy()[0], LR_dir['PPV'].to_numpy()[0],
GBM_dir['PPV'].to_numpy()[0],LR_rw['PPV'].to_numpy()[0], GBM_rw['PPV'].to_numpy()[0],
LR_lfr['PPV'].to_numpy()[0], GBM_lfr['PPV'].to_numpy()[0], PR['PPV'].to_numpy()[0],
adDeb['PPV'].to_numpy()[0],meta['PPV'].to_numpy()[0],
LR_EO['PPV'].to_numpy()[0],GBM_EO['PPV'].to_numpy()[0],LR_CalEO['PPV'].to_numpy()[0],
GBM_CalEO['PPV'].to_numpy()[0]])
PPV_std= list([LR_std['PPV_std'].to_numpy()[0], GBM_std['PPV_std'].to_numpy()[0], LR_dir_std['PPV_std'].to_numpy()[0],
GBM_dir_std['PPV_std'].to_numpy()[0],LR_rw_std['PPV_std'].to_numpy()[0], GBM_rw_std['PPV_std'].to_numpy()[0],
LR_lfr_std['PPV_std'].to_numpy()[0], GBM_lfr_std['PPV_std'].to_numpy()[0], PR_std['PPV_std'].to_numpy()[0],
adDeb_std['PPV_std'].to_numpy()[0],meta_std['PPV_std'].to_numpy()[0],
LR_EO_std['PPV_std'].to_numpy()[0],GBM_EO_std['PPV_std'].to_numpy()[0],LR_CalEO_std['PPV_std'].to_numpy()[0],
GBM_CalEO_std['PPV_std'].to_numpy()[0]])
NPV= list([LR['NPV'].to_numpy()[0], GBM['NPV'].to_numpy()[0], LR_dir['NPV'].to_numpy()[0],
GBM_dir['NPV'].to_numpy()[0],LR_rw['NPV'].to_numpy()[0], GBM_rw['NPV'].to_numpy()[0],
LR_lfr['NPV'].to_numpy()[0], GBM_lfr['NPV'].to_numpy()[0], PR['NPV'].to_numpy()[0],
adDeb['NPV'].to_numpy()[0],meta['NPV'].to_numpy()[0],
LR_EO['NPV'].to_numpy()[0],GBM_EO['NPV'].to_numpy()[0],LR_CalEO['NPV'].to_numpy()[0],
GBM_CalEO['NPV'].to_numpy()[0]])
NPV_std= list([LR_std['NPV_std'].to_numpy()[0], GBM_std['NPV_std'].to_numpy()[0], LR_dir_std['NPV_std'].to_numpy()[0],
GBM_dir_std['NPV_std'].to_numpy()[0],LR_rw_std['NPV_std'].to_numpy()[0], GBM_rw_std['NPV_std'].to_numpy()[0],
LR_lfr_std['NPV_std'].to_numpy()[0], GBM_lfr_std['NPV_std'].to_numpy()[0], PR_std['NPV_std'].to_numpy()[0],
adDeb_std['NPV_std'].to_numpy()[0],meta_std['NPV_std'].to_numpy()[0],
LR_EO_std['NPV_std'].to_numpy()[0],GBM_EO_std['NPV_std'].to_numpy()[0],LR_CalEO_std['NPV_std'].to_numpy()[0],
GBM_CalEO_std['NPV_std'].to_numpy()[0]])
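# Bar charts of Accuracy, Precision and NPV with standard-deviation error bars; bar colours group the methods into baselines (red), pre-processing (blue), in-processing (green) and post-processing (gray).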
fig,axs = plt.subplots(3, sharex=True)
ind = np.arange(15)
width=0.2
# Plotting
axs[0].bar( ind, Accuracy,align='center', yerr= Accuracy_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[1].bar( ind, PPV,align='center', yerr= PPV_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[2].bar( ind, NPV,align='center', yerr= NPV_std, ecolor='black', capsize=5, color=['r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[0].set_ylabel('Accuracy')
axs[1].set_ylabel('Precision')
axs[2].set_ylabel('NPV')
xlabels= ['LR','GBM','DIR+LR','DIR+GBM','RW+LR','RW+GBM','LFR+LR','LFR+GBM','PRemover','AdDeb','Meta','LR+EqOdds','GBM+EqOdds','LR+CalEqOdds','GBM+CalEqOdds']
plt.xticks(ind + width / 2, xlabels, rotation= 'vertical' )
fig.suptitle('Predictive Performance | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Performance.png',dpi=300, format='png', bbox_inches='tight')
plt.show()
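# Fairness measures (DI, SP, Consistency) for every method, with the value measured on the original (biased) data as the first entry.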
DI= list([LR['DATA_DI'].to_numpy()[0],LR['DI'].to_numpy()[0], GBM['DI'].to_numpy()[0], LR_dir['DI'].to_numpy()[0],
GBM_dir['DI'].to_numpy()[0],LR_rw['DI'].to_numpy()[0], GBM_rw['DI'].to_numpy()[0],
LR_lfr['DI'].to_numpy()[0], GBM_lfr['DI'].to_numpy()[0], PR['DI'].to_numpy()[0],
adDeb['DI'].to_numpy()[0],meta['DI'].to_numpy()[0],
LR_EO['DI'].to_numpy()[0],GBM_EO['DI'].to_numpy()[0],LR_CalEO['DI'].to_numpy()[0],
GBM_CalEO['DI'].to_numpy()[0]])
DI_std= list([LR_std['DATA_DI_std'].to_numpy()[0], LR_std['DI_std'].to_numpy()[0],GBM_std['DI_std'].to_numpy()[0], LR_dir_std['DI_std'].to_numpy()[0],
GBM_dir_std['DI_std'].to_numpy()[0],LR_rw_std['DI_std'].to_numpy()[0], GBM_rw_std['DI_std'].to_numpy()[0],
LR_lfr_std['DI_std'].to_numpy()[0], GBM_lfr_std['DI_std'].to_numpy()[0], PR_std['DI_std'].to_numpy()[0],
adDeb_std['DI_std'].to_numpy()[0],meta_std['DI_std'].to_numpy()[0],
LR_EO_std['DI_std'].to_numpy()[0],GBM_EO_std['DI_std'].to_numpy()[0],LR_CalEO_std['DI_std'].to_numpy()[0],
GBM_CalEO_std['DI_std'].to_numpy()[0]])
SP= list([LR['DATA_SP'].to_numpy()[0],LR['SP'].to_numpy()[0], GBM['SP'].to_numpy()[0], LR_dir['SP'].to_numpy()[0],
GBM_dir['SP'].to_numpy()[0],LR_rw['SP'].to_numpy()[0], GBM_rw['SP'].to_numpy()[0],
LR_lfr['SP'].to_numpy()[0], GBM_lfr['SP'].to_numpy()[0], PR['SP'].to_numpy()[0],
adDeb['SP'].to_numpy()[0],meta['SP'].to_numpy()[0],
LR_EO['SP'].to_numpy()[0],GBM_EO['SP'].to_numpy()[0],LR_CalEO['SP'].to_numpy()[0],
GBM_CalEO['SP'].to_numpy()[0]])
SP_std= list([LR_std['DATA_SP_std'].to_numpy()[0], LR_std['SP_std'].to_numpy()[0],GBM_std['SP_std'].to_numpy()[0], LR_dir_std['SP_std'].to_numpy()[0],
GBM_dir_std['SP_std'].to_numpy()[0],LR_rw_std['SP_std'].to_numpy()[0], GBM_rw_std['SP_std'].to_numpy()[0],
LR_lfr_std['SP_std'].to_numpy()[0], GBM_lfr_std['SP_std'].to_numpy()[0], PR_std['SP_std'].to_numpy()[0],
adDeb_std['SP_std'].to_numpy()[0],meta_std['SP_std'].to_numpy()[0],
    LR_EO_std['SP_std'].to_numpy()[0],GBM_EO_std['SP_std'].to_numpy()[0],LR_CalEO_std['SP_std'].to_numpy()[0],
GBM_CalEO_std['SP_std'].to_numpy()[0]])
CONSISTENCY= list([LR['DATA_CONS'].to_numpy()[0],LR['CONSISTENCY'].to_numpy()[0], GBM['CONSISTENCY'].to_numpy()[0], LR_dir['CONSISTENCY'].to_numpy()[0],
GBM_dir['CONSISTENCY'].to_numpy()[0],LR_rw['CONSISTENCY'].to_numpy()[0], GBM_rw['CONSISTENCY'].to_numpy()[0],
LR_lfr['CONSISTENCY'].to_numpy()[0], GBM_lfr['CONSISTENCY'].to_numpy()[0], PR['CONSISTENCY'].to_numpy()[0],
adDeb['CONSISTENCY'].to_numpy()[0],meta['CONSISTENCY'].to_numpy()[0],
LR_EO['CONSISTENCY'].to_numpy()[0],GBM_EO['CONSISTENCY'].to_numpy()[0],LR_CalEO['CONSISTENCY'].to_numpy()[0],
GBM_CalEO['CONSISTENCY'].to_numpy()[0]])
CONSISTENCY_std= list([LR_std['DATA_CONS_std'].to_numpy()[0], LR_std['CONSISTENCY_std'].to_numpy()[0],GBM_std['CONSISTENCY_std'].to_numpy()[0], LR_dir_std['CONSISTENCY_std'].to_numpy()[0],
GBM_dir_std['CONSISTENCY_std'].to_numpy()[0],LR_rw_std['CONSISTENCY_std'].to_numpy()[0], GBM_rw_std['CONSISTENCY_std'].to_numpy()[0],
LR_lfr_std['CONSISTENCY_std'].to_numpy()[0], GBM_lfr_std['CONSISTENCY_std'].to_numpy()[0], PR_std['CONSISTENCY_std'].to_numpy()[0],
adDeb_std['CONSISTENCY_std'].to_numpy()[0],meta_std['CONSISTENCY_std'].to_numpy()[0],
    LR_EO_std['CONSISTENCY_std'].to_numpy()[0],GBM_EO_std['CONSISTENCY_std'].to_numpy()[0],LR_CalEO_std['CONSISTENCY_std'].to_numpy()[0],
GBM_CalEO_std['CONSISTENCY_std'].to_numpy()[0]])
fig,axs = plt.subplots(3, sharex=True)
ind = np.arange(16)
width=0.2
axs[0].bar( ind, SP,align='center', yerr= SP_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[0].axhline(y=0, color= 'green',linestyle= '--')
axs[0].set_ylabel('SP')
axs[1].bar( ind, CONSISTENCY,align='center', yerr= CONSISTENCY_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[1].set_ylabel('Consistency')
axs[2].bar( ind, DI,align='center', yerr= DI_std, ecolor='black', capsize=5, color=['brown','r','r','b','b','b','b','b','b','g','g','g','gray','gray','gray','gray'])
axs[2].axhline(y=1, color= 'green',linestyle= '--')
axs[2].axhline(y=0, color= 'black',linestyle= '--')
axs[2].set_ylabel('DI')
xlabels= ['Biased_Data','LR','GBM','DIR+LR','DIR+GBM','RW+LR','RW+GBM','LFR+LR','LFR+GBM','PRemover','AdDeb','Meta','LR+EqOdds','GBM+EqOdds','LR+CalEqOdds','GBM+CalEqOdds']
plt.xticks(ind + width / 2, xlabels, rotation= 'vertical' )
fig.suptitle('Fairness | Data: Law')
plt.savefig(r'/content/gdrive/MyDrive/Colab_Notebooks/FAIRNESS_SURVEY/images/Law/Fairness.png',dpi=300, format='png', bbox_inches='tight')
plt.show()

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/NER_DEMOGRAPHICS.ipynb)
# **Detect demographic information**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/workshop_license_keys.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['JSL_SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['JSL_OCR_LICENSE'] = license_keys['JSL_OCR_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
jsl_version = secret.split('-')[0]
jsl_version
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python and start the Spark session
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
builder = SparkSession.builder \
.appName('Spark NLP Licensed') \
.master('local[*]') \
.config('spark.driver.memory', '16G') \
.config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer') \
.config('spark.kryoserializer.buffer.max', '2000M') \
.config('spark.jars.packages', 'com.johnsnowlabs.nlp:spark-nlp_2.11:' +sparknlp.version()) \
.config('spark.jars', f'https://pypi.johnsnowlabs.com/{secret}/spark-nlp-jsl-{jsl_version}.jar')
spark = builder.getOrCreate()
```
## 2. Select the NER model and construct the pipeline
Select the NER model - Demographics models: **ner_deid_enriched, ner_deid_large, ner_jsl**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# You can change this to the model you want to use and re-run cells below.
# Demographics models: ner_deid_enriched, ner_deid_large, ner_jsl
MODEL_NAME = "ner_deid_enriched"
```
Create the pipeline
```
document_assembler = DocumentAssembler() \
.setInputCol('text')\
.setOutputCol('document')
sentence_detector = SentenceDetector() \
.setInputCols(['document'])\
.setOutputCol('sentence')
tokenizer = Tokenizer()\
.setInputCols(['sentence']) \
.setOutputCol('token')
word_embeddings = WordEmbeddingsModel.pretrained('embeddings_clinical', 'en', 'clinical/models') \
.setInputCols(['sentence', 'token']) \
.setOutputCol('embeddings')
clinical_ner = NerDLModel.pretrained(MODEL_NAME, 'en', 'clinical/models') \
.setInputCols(['sentence', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter()\
.setInputCols(['sentence', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF("text")
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
input_list = [
"""HISTORY OF PRESENT ILLNESS: Mr. Smith is a 60-year-old white male veteran with multiple comorbidities, who has a history of bladder cancer diagnosed approximately two years ago by the VA Hospital. He underwent a resection there. He was to be admitted to the Day Hospital for cystectomy. He was seen in Urology Clinic and Radiology Clinic on 02/04/2003.
HOSPITAL COURSE: Mr. Smith presented to the Day Hospital in anticipation for Urology surgery. On evaluation, EKG, echocardiogram was abnormal, a Cardiology consult was obtained. A cardiac adenosine stress MRI was then proceeded, same was positive for inducible ischemia, mild-to-moderate inferolateral subendocardial infarction with peri-infarct ischemia. In addition, inducible ischemia seen in the inferior lateral septum. Mr. Smith underwent a left heart catheterization, which revealed two vessel coronary artery disease. The RCA, proximal was 95% stenosed and the distal 80% stenosed. The mid LAD was 85% stenosed and the distal LAD was 85% stenosed. There was four Multi-Link Vision bare metal stents placed to decrease all four lesions to 0%. Following intervention, Mr. Smith was admitted to 7 Ardmore Tower under Cardiology Service under the direction of Dr. Hart. Mr. Smith had a noncomplicated post-intervention hospital course. He was stable for discharge home on 02/07/2003 with instructions to take Plavix daily for one month and Urology is aware of the same."""
]
```
## 4. Use the pipeline to create outputs
```
df = spark.createDataFrame(pd.DataFrame({"text": input_list}))
result = pipeline_model.transform(df)
```
## 5. Visualize results
Visualize outputs as data frame
```
exploded = F.explode(F.arrays_zip('ner_chunk.result', 'ner_chunk.metadata'))
select_expression_0 = F.expr("cols['0']").alias("chunk")
select_expression_1 = F.expr("cols['1']['entity']").alias("ner_label")
result.select(exploded.alias("cols")) \
.select(select_expression_0, select_expression_1).show(truncate=False)
result = result.toPandas()
```
Functions to display outputs as HTML
```
from IPython.display import HTML, display
import random
def get_color():
r = lambda: random.randint(128,255)
return "#%02x%02x%02x" % (r(), r(), r())
def annotation_to_html(full_annotation):
ner_chunks = full_annotation[0]['ner_chunk']
text = full_annotation[0]['document'][0].result
label_color = {}
for chunk in ner_chunks:
label_color[chunk.metadata['entity']] = get_color()
html_output = "<div>"
pos = 0
for n in ner_chunks:
if pos < n.begin and pos < len(text):
html_output += f"<span class=\"others\">{text[pos:n.begin]}</span>"
pos = n.end + 1
html_output += f"<span class=\"entity-wrapper\" style=\"color: black; background-color: {label_color[n.metadata['entity']]}\"> <span class=\"entity-name\">{n.result}</span> <span class=\"entity-type\">[{n.metadata['entity']}]</span></span>"
if pos < len(text):
html_output += f"<span class=\"others\">{text[pos:]}</span>"
html_output += "</div>"
display(HTML(html_output))
```
Display example outputs as HTML
```
for example in input_list:
annotation_to_html(light_pipeline.fullAnnotate(example))
```
# Example of using DOpt Federov Exchange Algorithm
## Algorithm obtained from
- **Algorithm AS 295:** A Fedorov Exchange Algorithm for D-Optimal Design
- **Author(s):** Alan J. Miller and Nam-Ky Nguyen
- **Source:** Journal of the Royal Statistical Society. Series C (Applied Statistics), Vol. 43, No. 4, pp. 669-677, 1994
- **Stable URL:** http://www.jstor.org/stable/2986264
## Source code from
- http://ftp.uni-bayreuth.de/math/statlib/apstat/
# Load the dopt shared library that provides the interface
### Print the documentation and note that
### Input
- $x$ is the 2D numpy array that contains the candidate points to select from
- $n$ is the number of points in the final design
- $in$ is the number of preselected points that MUST be in the final design (>= 0)
- $rstart$ indicates whether a random start should be performed; it should be True in most cases. If False, the user must supply the initial design in $picked$
- $picked$ is a 1D array that contains the preselected point ID's (remember FORTRAN arrays are 1-based) on input. The first $in$ entries are read for ID's. On output it contains the ID's in x of the final selection
### Output
- $lndet$ is the logarithm of the determinant of the best design
- $ifault$ holds the possible fault codes:
>- -1 if no full rank starting design is found
>- 0 if no error is detected
>- 1* if DIM1 < NCAND
>- 2* if K < N
>- 4* if NRBAR < K(K - 1)/2
>- 8* if K < KIN + NBLOCK
>- 16* if the sum of block sizes is not equal to N
>- 32* if any IN(I) < 0 or any IN(I) > BLKSIZ(I)
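Before walking through the full example below, here is a minimal sketch (not from the original notebook) of how this interface is typically invoked, assuming `dopt` is the f2py-compiled wrapper described above and using a tiny, hypothetical candidate set:
```
import numpy as np
import dopt  # assumed: the f2py-compiled AS 295 wrapper used in this notebook

# Hypothetical candidate set: 5 points, model matrix = intercept + one factor
x = np.asfortranarray([[1.0, -1.0], [1.0, -0.5], [1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])
n = np.int32(3)                 # number of points in the final design
picked = np.zeros(n, np.int32)  # no preselected points, so the third argument is 0
lndet, ifault = dopt.dopt(x, n, np.int32(0), True, picked)  # rstart=True -> random start
if ifault != 0:
    raise RuntimeError("dopt returned ifault = %d" % ifault)
print(np.sort(picked), lndet)   # picked now holds 1-based row indices into x
```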
```
import numpy as np
import math as m
import dopt
print( dopt.dopt.__doc__ )
```
# Define a sample set of data and setup all parameters to call the interface
- Note that for the picked array we need to define the entry type as np.int32
- Example problem obtained from: https://ncss-wpengine.netdna-ssl.com/wp-content/themes/ncss/pdf/Procedures/NCSS/D-Optimal_Designs.pdf
>- 3 Design variables, Full Quadratic model
>- 27 Candidate points in design
>- Would like to select 10 D-Optimal points
```
# Sample data set
data = [
[ -1, -1, -1 ],
[ 0, -1, -1 ],
[ 1, -1, -1 ],
[ -1, 0, -1 ],
[ 0, 0, -1 ],
[ 1, 0, -1 ],
[ -1, 1, -1 ],
[ 0, 1, -1 ],
[ 1, 1, -1 ],
[ -1, -1, 0 ],
[ 0, -1, 0 ],
[ 1, -1, 0 ],
[ -1, 0, 0 ],
[ 0, 0, 0 ],
[ 1, 0, 0 ],
[ -1, 1, 0 ],
[ 0, 1, 0 ],
[ 1, 1, 0 ],
[ -1, -1, 1 ],
[ 0, -1, 1 ],
[ 1, -1, 1 ],
[ -1, 0, 1 ],
[ 0, 0, 1 ],
[ 1, 0, 1 ],
[ -1, 1, 1 ],
[ 0, 1, 1 ],
[ 1, 1, 1 ] ]
# Create numpy array to store model matrix
x = np.zeros( (len(data), 10), float, order='F' )
# Create model matrix from data set - remember this is a 3 variable, full quadratic model
for i in range(len(data)):
A = data[i][0]
B = data[i][1]
C = data[i][2]
x[i, 0] = 1.0
x[i, 1] = A
x[i, 2] = B
x[i, 3] = C
x[i, 4] = A * A
x[i, 5] = A * B
x[i, 6] = A * C
x[i, 7] = B * B
x[i, 8] = B * C
x[i, 9] = C * C
print( x )
print( x.shape )
# Number of points to pick
n = np.int32(10)
# Array of point ID's that will be picked
picked = np.zeros( n, np.int32 , order='F')
# Preselected points that must be in the array
#picked[0] = 3
#picked[1] = 4
# Number of picked points
npicked = np.int32(0)
```
# Call the interface and print the output and the picked array
- The reported maximum determinant for this design is 1327104
- We raise an exception when iFault is not 0 - this is just good practice
- We repeat the DOptimal process 10 times and pick the best design. We do this in an attempt to avoid local minima
```
# Store the best design and the corresponding determinant values
bestDes = np.copy( picked )
bestDet = 0.0
# Repeat the process 10 times and store the best design
for i in range(0, 10) :
picked = np.random.randint( 1, x.shape[0]+1, n, np.int32 )
lnDet, iFault = dopt.dopt( x, np.int32(n), np.int32(npicked), False, picked )
print(lnDet)
# Raise an exception if iFault is not equal to 0
if iFault != 0:
        raise ValueError( "Non-zero return code from dopt algorithm. iFault = ", iFault )
# Store the best design
if m.fabs(lnDet) > bestDet:
bestDet =m.fabs(lnDet)
bestDes = np.copy( picked )
# Print the best design out
print( "Maximum Determinant Found:", bestDet, m.exp(bestDet) )
print( "\nBest Design Found (indices):\n", np.sort(bestDes) )
print( "\nBest Design Found (variables):\n", x[np.sort(bestDes)-1,1:4] )
```
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
<br></br>
<br></br>
## *Data Science Unit 4 Sprint 3 Assignment 2*
# Convolutional Neural Networks (CNNs)
# Assignment
- <a href="#p1">Part 1:</a> Pre-Trained Model
- <a href="#p2">Part 2:</a> Custom CNN Model
- <a href="#p3">Part 3:</a> CNN with Data Augmentation
You will apply three different CNN models to a binary image classification problem using Keras. Classify images of mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative class (0).
|Mountain (+)|Forest (-)|
|---|---|
|||
The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size is something you can expect when prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models.
# Pre - Trained Model
<a id="p1"></a>
Load a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:
```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model # This is the functional API
resnet = ResNet50(weights='imagenet', include_top=False)
```
The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off training of the ResNet layers. We want to use the learned parameters without updating them in future training passes.
```python
for layer in resnet.layers:
layer.trainable = False
```
Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature-processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten by taking the average of each of the last convolutional layer's outputs (which are still two dimensional).
```python
x = resnet.output
x = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
```
Your assignment is to apply the transfer learning above to classify images of mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative class (0).
Steps to complete assignment:
1. Load in Image Data into numpy arrays (`X`)
2. Create a `y` for the labels
3. Train your model with pretrained layers from resnet
4. Report your model's accuracy
## Load in Data

Check out [`skimage`](https://scikit-image.org/) for useful functions related to processing the images. In particular, check out the documentation for `skimage.io.imread_collection` and `skimage.transform.resize`.
```
import skimage
import numpy as np
from skimage.io import imread_collection
from skimage.transform import resize
image_files = ['./data/mountain/*', './data/forest/*']
mountain = np.asarray(imread_collection('./data/mountain/*'))
forest = np.asarray(imread_collection('./data/forest/*'))
X_train = np.append(mountain, forest, axis=0)
y_train = []
for _ in mountain:
y_train.append(1)
for _ in forest:
y_train.append(0)
y_train = np.array(y_train)
X_train.shape
y_train.shape
```
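The collections above happen to stack cleanly because the images share one shape. If they did not, `skimage.transform.resize` (mentioned above) could standardize them first. A minimal sketch, assuming a hypothetical 224x224 target size; `mountain` and `forest` are the collections loaded above:
```
from skimage.transform import resize
import numpy as np

# Hypothetical target shape; only needed if the images differ in size.
TARGET_SHAPE = (224, 224, 3)
mountain_resized = np.stack([resize(img, TARGET_SHAPE) for img in mountain])
forest_resized = np.stack([resize(img, TARGET_SHAPE) for img in forest])
X_train = np.append(mountain_resized, forest_resized, axis=0)
```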
## Instantiate Model
```
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
resnet = ResNet50(weights='imagenet', include_top=False)
for layer in resnet.layers:
layer.trainable = False
x = GlobalAveragePooling2D()(resnet.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(2, activation='softmax')(x)  # softmax so the two class scores form a distribution for sparse_categorical_crossentropy
model = Model(resnet.input, predictions)
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
```
## Fit Model
```
model.fit(X_train, y_train,
epochs=1,
verbose=1,
)
```
# Custom CNN Model
```
# Compile Model
# Fit Model
```
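For comparison with the transfer-learning model above, here is a minimal sketch of a small from-scratch CNN for this binary task (an illustrative addition, not the assignment's reference solution). It assumes the images are stacked into `X_train` with a fixed shape, taken here as 256x256x3 purely as an example, and that `y_train` holds the 0/1 labels:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Small custom CNN: two conv/pool blocks, then a dense head with a sigmoid output.
custom_model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(256, 256, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid')  # mountain (1) vs forest (0)
])

# Compile Model
custom_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fit Model
custom_model.fit(X_train, y_train, epochs=5, validation_split=0.2)
```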
# Resources and Stretch Goals
Stretch goals
- Enhance your code to use classes/functions and accept terms to search and classes to look for in recognizing the downloaded images (e.g. download images of parties, recognize all that contain balloons)
- Check out [other available pretrained networks](https://tfhub.dev), try some and compare
- Image recognition/classification is somewhat solved, but *relationships* between entities and describing an image is not - check out some of the extended resources (e.g. [Visual Genome](https://visualgenome.org/)) on the topic
- Transfer learning - using images you source yourself, [retrain a classifier](https://www.tensorflow.org/hub/tutorials/image_retraining) with a new category
- (Not CNN related) Use [piexif](https://pypi.org/project/piexif/) to check out the metadata of images passed in to your system - see if they're from a national park! (Note - many images lack GPS metadata, so this won't work in most cases, but still cool)
Resources
- [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) - influential paper (introduced ResNet)
- [YOLO: Real-Time Object Detection](https://pjreddie.com/darknet/yolo/) - an influential convolution based object detection system, focused on inference speed (for applications to e.g. self driving vehicles)
- [R-CNN, Fast R-CNN, Faster R-CNN, YOLO](https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e) - comparison of object detection systems
- [Common Objects in Context](http://cocodataset.org/) - a large-scale object detection, segmentation, and captioning dataset
- [Visual Genome](https://visualgenome.org/) - a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language
```
# look at tools/set_up_magics.ipynb
yandex_metrica_allowed = True ; get_ipython().run_cell('# one_liner_str\n\nget_ipython().run_cell_magic(\'javascript\', \'\', \'// setup cpp code highlighting\\nIPython.CodeCell.options_default.highlight_modes["text/x-c++src"] = {\\\'reg\\\':[/^%%cpp/]} ;\')\n\n# creating magics\nfrom IPython.core.magic import register_cell_magic, register_line_magic\nfrom IPython.display import display, Markdown, HTML\nimport argparse\nfrom subprocess import Popen, PIPE\nimport random\nimport sys\nimport os\nimport re\nimport signal\nimport shutil\nimport shlex\nimport glob\n\n@register_cell_magic\ndef save_file(args_str, cell, line_comment_start="#"):\n parser = argparse.ArgumentParser()\n parser.add_argument("fname")\n parser.add_argument("--ejudge-style", action="store_true")\n args = parser.parse_args(args_str.split())\n \n cell = cell if cell[-1] == \'\\n\' or args.no_eof_newline else cell + "\\n"\n cmds = []\n with open(args.fname, "w") as f:\n f.write(line_comment_start + " %%cpp " + args_str + "\\n")\n for line in cell.split("\\n"):\n line_to_write = (line if not args.ejudge_style else line.rstrip()) + "\\n"\n if line.startswith("%"):\n run_prefix = "%run "\n if line.startswith(run_prefix):\n cmds.append(line[len(run_prefix):].strip())\n f.write(line_comment_start + " " + line_to_write)\n continue\n run_prefix = "%# "\n if line.startswith(run_prefix):\n f.write(line_comment_start + " " + line_to_write)\n continue\n raise Exception("Unknown %%save_file subcommand: \'%s\'" % line)\n else:\n f.write(line_to_write)\n f.write("" if not args.ejudge_style else line_comment_start + r" line without \\n")\n for cmd in cmds:\n display(Markdown("Run: `%s`" % cmd))\n get_ipython().system(cmd)\n\n@register_cell_magic\ndef cpp(fname, cell):\n save_file(fname, cell, "//")\n\n@register_cell_magic\ndef asm(fname, cell):\n save_file(fname, cell, "//")\n \n@register_cell_magic\ndef makefile(fname, cell):\n assert not fname\n save_file("makefile", cell.replace(" " * 4, "\\t"))\n \n@register_line_magic\ndef p(line):\n try:\n expr, comment = line.split(" #")\n display(Markdown("`{} = {}` # {}".format(expr.strip(), eval(expr), comment.strip())))\n except:\n display(Markdown("{} = {}".format(line, eval(line))))\n \ndef show_file(file, clear_at_begin=True, return_html_string=False):\n if clear_at_begin:\n get_ipython().system("truncate --size 0 " + file)\n obj = file.replace(\'.\', \'_\').replace(\'/\', \'_\') + "_obj"\n html_string = \'\'\'\n <!--MD_BEGIN_FILTER-->\n <script type=text/javascript>\n var entrance___OBJ__ = 0;\n var errors___OBJ__ = 0;\n function refresh__OBJ__()\n {\n entrance___OBJ__ -= 1;\n var elem = document.getElementById("__OBJ__");\n if (elem) {\n var xmlhttp=new XMLHttpRequest();\n xmlhttp.onreadystatechange=function()\n {\n var elem = document.getElementById("__OBJ__");\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\n if (elem && xmlhttp.readyState==4) {\n if (xmlhttp.status==200)\n {\n errors___OBJ__ = 0;\n if (!entrance___OBJ__) {\n elem.innerText = xmlhttp.responseText;\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n }\n return xmlhttp.responseText;\n } else {\n errors___OBJ__ += 1;\n if (errors___OBJ__ < 10 && !entrance___OBJ__) {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n }\n }\n }\n }\n xmlhttp.open("GET", "__FILE__", true);\n xmlhttp.setRequestHeader("Cache-Control", "no-cache");\n xmlhttp.send(); \n }\n }\n \n if (!entrance___OBJ__) {\n entrance___OBJ__ += 1;\n 
refresh__OBJ__(); \n }\n </script>\n \n <font color="white"> <tt>\n <p id="__OBJ__" style="font-size: 16px; border:3px #333333 solid; background: #333333; border-radius: 10px; padding: 10px; "></p>\n </tt> </font>\n <!--MD_END_FILTER-->\n <!--MD_FROM_FILE __FILE__ -->\n \'\'\'.replace("__OBJ__", obj).replace("__FILE__", file)\n if return_html_string:\n return html_string\n display(HTML(html_string))\n \nBASH_POPEN_TMP_DIR = "./bash_popen_tmp"\n \ndef bash_popen_terminate_all():\n for p in globals().get("bash_popen_list", []):\n print("Terminate pid=" + str(p.pid), file=sys.stderr)\n p.terminate()\n globals()["bash_popen_list"] = []\n if os.path.exists(BASH_POPEN_TMP_DIR):\n shutil.rmtree(BASH_POPEN_TMP_DIR)\n\nbash_popen_terminate_all() \n\ndef bash_popen(cmd):\n if not os.path.exists(BASH_POPEN_TMP_DIR):\n os.mkdir(BASH_POPEN_TMP_DIR)\n h = os.path.join(BASH_POPEN_TMP_DIR, str(random.randint(0, 1e18)))\n stdout_file = h + ".out.html"\n stderr_file = h + ".err.html"\n run_log_file = h + ".fin.html"\n \n stdout = open(stdout_file, "wb")\n stdout = open(stderr_file, "wb")\n \n html = """\n <table width="100%">\n <colgroup>\n <col span="1" style="width: 70px;">\n <col span="1">\n </colgroup> \n <tbody>\n <tr> <td><b>STDOUT</b></td> <td> {stdout} </td> </tr>\n <tr> <td><b>STDERR</b></td> <td> {stderr} </td> </tr>\n <tr> <td><b>RUN LOG</b></td> <td> {run_log} </td> </tr>\n </tbody>\n </table>\n """.format(\n stdout=show_file(stdout_file, return_html_string=True),\n stderr=show_file(stderr_file, return_html_string=True),\n run_log=show_file(run_log_file, return_html_string=True),\n )\n \n cmd = """\n bash -c {cmd} &\n pid=$!\n echo "Process started! pid=${{pid}}" > {run_log_file}\n wait ${{pid}}\n echo "Process finished! exit_code=$?" >> {run_log_file}\n """.format(cmd=shlex.quote(cmd), run_log_file=run_log_file)\n # print(cmd)\n display(HTML(html))\n \n p = Popen(["bash", "-c", cmd], stdin=PIPE, stdout=stdout, stderr=stdout)\n \n bash_popen_list.append(p)\n return p\n\n\n@register_line_magic\ndef bash_async(line):\n bash_popen(line)\n \n \ndef show_log_file(file, return_html_string=False):\n obj = file.replace(\'.\', \'_\').replace(\'/\', \'_\') + "_obj"\n html_string = \'\'\'\n <!--MD_BEGIN_FILTER-->\n <script type=text/javascript>\n var entrance___OBJ__ = 0;\n var errors___OBJ__ = 0;\n function halt__OBJ__(elem, color)\n {\n elem.setAttribute("style", "font-size: 14px; background: " + color + "; padding: 10px; border: 3px; border-radius: 5px; color: white; "); \n }\n function refresh__OBJ__()\n {\n entrance___OBJ__ -= 1;\n if (entrance___OBJ__ < 0) {\n entrance___OBJ__ = 0;\n }\n var elem = document.getElementById("__OBJ__");\n if (elem) {\n var xmlhttp=new XMLHttpRequest();\n xmlhttp.onreadystatechange=function()\n {\n var elem = document.getElementById("__OBJ__");\n console.log(!!elem, xmlhttp.readyState, xmlhttp.status, entrance___OBJ__);\n if (elem && xmlhttp.readyState==4) {\n if (xmlhttp.status==200)\n {\n errors___OBJ__ = 0;\n if (!entrance___OBJ__) {\n if (elem.innerHTML != xmlhttp.responseText) {\n elem.innerHTML = xmlhttp.responseText;\n }\n if (elem.innerHTML.includes("Process finished.")) {\n halt__OBJ__(elem, "#333333");\n } else {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n }\n }\n return xmlhttp.responseText;\n } else {\n errors___OBJ__ += 1;\n if (!entrance___OBJ__) {\n if (errors___OBJ__ < 6) {\n entrance___OBJ__ += 1;\n console.log("req");\n window.setTimeout("refresh__OBJ__()", 300); \n } else {\n halt__OBJ__(elem, 
"#994444");\n }\n }\n }\n }\n }\n xmlhttp.open("GET", "__FILE__", true);\n xmlhttp.setRequestHeader("Cache-Control", "no-cache");\n xmlhttp.send(); \n }\n }\n \n if (!entrance___OBJ__) {\n entrance___OBJ__ += 1;\n refresh__OBJ__(); \n }\n </script>\n\n <p id="__OBJ__" style="font-size: 14px; background: #000000; padding: 10px; border: 3px; border-radius: 5px; color: white; ">\n </p>\n \n </font>\n <!--MD_END_FILTER-->\n <!--MD_FROM_FILE __FILE__.md -->\n \'\'\'.replace("__OBJ__", obj).replace("__FILE__", file)\n if return_html_string:\n return html_string\n display(HTML(html_string))\n\n \nclass TInteractiveLauncher:\n tmp_path = "./interactive_launcher_tmp"\n def __init__(self, cmd):\n try:\n os.mkdir(TInteractiveLauncher.tmp_path)\n except:\n pass\n name = str(random.randint(0, 1e18))\n self.inq_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".inq")\n self.log_path = os.path.join(TInteractiveLauncher.tmp_path, name + ".log")\n \n os.mkfifo(self.inq_path)\n open(self.log_path, \'w\').close()\n open(self.log_path + ".md", \'w\').close()\n\n self.pid = os.fork()\n if self.pid == -1:\n print("Error")\n if self.pid == 0:\n exe_cands = glob.glob("../tools/launcher.py") + glob.glob("../../tools/launcher.py")\n assert(len(exe_cands) == 1)\n assert(os.execvp("python3", ["python3", exe_cands[0], "-l", self.log_path, "-i", self.inq_path, "-c", cmd]) == 0)\n self.inq_f = open(self.inq_path, "w")\n interactive_launcher_opened_set.add(self.pid)\n show_log_file(self.log_path)\n\n def write(self, s):\n s = s.encode()\n assert len(s) == os.write(self.inq_f.fileno(), s)\n \n def get_pid(self):\n n = 100\n for i in range(n):\n try:\n return int(re.findall(r"PID = (\\d+)", open(self.log_path).readline())[0])\n except:\n if i + 1 == n:\n raise\n time.sleep(0.1)\n \n def input_queue_path(self):\n return self.inq_path\n \n def close(self):\n self.inq_f.close()\n os.waitpid(self.pid, 0)\n os.remove(self.inq_path)\n # os.remove(self.log_path)\n self.inq_path = None\n self.log_path = None \n interactive_launcher_opened_set.remove(self.pid)\n self.pid = None\n \n @staticmethod\n def terminate_all():\n if "interactive_launcher_opened_set" not in globals():\n globals()["interactive_launcher_opened_set"] = set()\n global interactive_launcher_opened_set\n for pid in interactive_launcher_opened_set:\n print("Terminate pid=" + str(pid), file=sys.stderr)\n os.kill(pid, signal.SIGKILL)\n os.waitpid(pid, 0)\n interactive_launcher_opened_set = set()\n if os.path.exists(TInteractiveLauncher.tmp_path):\n shutil.rmtree(TInteractiveLauncher.tmp_path)\n \nTInteractiveLauncher.terminate_all()\n \nyandex_metrica_allowed = bool(globals().get("yandex_metrica_allowed", False))\nif yandex_metrica_allowed:\n display(HTML(\'\'\'<!-- YANDEX_METRICA_BEGIN -->\n <script type="text/javascript" >\n (function(m,e,t,r,i,k,a){m[i]=m[i]||function(){(m[i].a=m[i].a||[]).push(arguments)};\n m[i].l=1*new Date();k=e.createElement(t),a=e.getElementsByTagName(t)[0],k.async=1,k.src=r,a.parentNode.insertBefore(k,a)})\n (window, document, "script", "https://mc.yandex.ru/metrika/tag.js", "ym");\n\n ym(59260609, "init", {\n clickmap:true,\n trackLinks:true,\n accurateTrackBounce:true\n });\n </script>\n <noscript><div><img src="https://mc.yandex.ru/watch/59260609" style="position:absolute; left:-9999px;" alt="" /></div></noscript>\n <!-- YANDEX_METRICA_END -->\'\'\'))\n\ndef make_oneliner():\n html_text = \'("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. 
Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "")\'\n html_text += \' + "<""!-- MAGICS_SETUP_PRINTING_END -->"\'\n return \'\'.join([\n \'# look at tools/set_up_magics.ipynb\\n\',\n \'yandex_metrica_allowed = True ; get_ipython().run_cell(%s);\' % repr(one_liner_str),\n \'display(HTML(%s))\' % html_text,\n \' #\'\'MAGICS_SETUP_END\'\n ])\n \n\n');display(HTML(("В этот ноутбук встроен код Яндекс Метрики для сбора статистики использований. Если вы не хотите, чтобы по вам собиралась статистика, исправьте: yandex_metrica_allowed = False" if yandex_metrica_allowed else "") + "<""!-- MAGICS_SETUP_PRINTING_END -->")) #MAGICS_SETUP_END
```
# Sockets, and TCP sockets in particular
<br>
<div style="text-align: right"> Thanks to <a href="https://github.com/SyrnikRebirth">Gleb Sova</a> and <a href="https://github.com/Disadvantaged">Dimitris Golyar</a> for helping to write this text </div>
<br>
**The OSI model**
[More about the layers](https://zvondozvon.ru/tehnologii/model-osi)
1. Physical layer (PHYSICAL)
2. Data link layer (DATA LINK) <br>
Responsible for transferring frames of data. To do this, metadata and a checksum are added to each block. <br>
It deals with two important problems: <br>
    1. Transferring a data frame
    2. Data collisions. <br>
    This can be handled in two ways: retransmit the data, or send the data over Ethernet cables with switches as intermediaries (having only two parties on each link makes sharing the medium simpler).
3. Network layer (NETWORK) <br>
IP addresses appear here. This layer selects the route for the data, taking into account path length, network load, etc. <br>
One IP address can correspond to several devices; this is done with a hack at the router level (NAT).
One device can have several IP addresses; no hacks needed for that.
4. Transport layer (TRANSPORT) `<-` **sockets** are the interfaces of this layer <br>
An important point: the network layer is about delivering messages between specific hosts, while the transport layer is about delivering them between specific programs on specific hosts. <br>
It is usually implemented in the operating system kernel. <br>
Also note that the transport layer, while exposing a single interface, can have different implementations. For example UNIX sockets: in that case there is no network layer underneath, because the data never leaves the machine. <br>
The notion of a port appears here - a port identifies the receiving program on a host. <br>
Data transfer protocols:
    1. TCP - establishes a connection that behaves like a pipe. Reliable, retransmits data, but slow. It adjusts the sending rate to the network load so as not to bring the network down.
    2. UDP - fast and unreliable. Sends all the data at once.
5. Session layer (SESSION) (IMHO not needed)
6. Presentation layer (PRESENTATION) (IMHO not needed)
7. Application layer (APPLICATION)
On today's agenda:
* `socketpair` - an <a href="#socketpair" style="color:#856024">analogue of `pipe`</a>, but the resulting descriptors have socket semantics: each file descriptor works for both reading and writing (so this "pipe" is bidirectional), and it should be closed with a `shutdown` call
* `socket` - the function that creates a socket
  * TCP
    * <a href="#socket_unix" style="color:#856024">AF_UNIX</a> - a socket local to the machine. The address in this case is the path of the socket file in the file system.
    * <a href="#socket_inet" style="color:#856024">AF_INET</a> - a socket for standard ipv4 connections. **This is the most important example in this notebook**.
    * <a href="#socket_inet6" style="color:#856024">AF_INET6</a> - a socket for standard ipv6 connections.
  * UDP
    * <a href="#socket_udp" style="color:#856024">AF_INET</a> - sending datagrams over ipv4.
[A site with good pictures of the order of low-level calls in client and server applications](http://support.fastwel.ru/AppNotes/AN/AN-0001.html#server_tcp_init)
<a href="#hw" style="color:#856024">Comments on the homework</a>
[Yakovlev's reading](https://github.com/victor-yacovlev/mipt-diht-caos/tree/master/practice/sockets-tcp)
# netcat
For debugging, the following can be useful:
* `netcat -lv localhost 30000` - listen on the given port over TCP. Prints everything the client writes, and sends data from its own stdin to the connected client.
* `netcat localhost 30000` - connect to a server over TCP. Input/output works as expected.
* `netcat -lvu localhost 30000` - listen over UDP. Though it seems this command can only receive a single datagram, after which something breaks.
* `echo "asfrtvf" | netcat -u -q1 localhost 30000` - send a datagram. For some reason the -v option behaves strangely in this case.
# <a name="socketpair"></a> socketpair as a pipe
A socket used as a pipe (i.e. within a single machine) lets you write roughly the same code for local connections and for connections over the internet, and gives you access to socket features.
[close vs shutdown](https://stackoverflow.com/questions/48208236/tcp-close-vs-shutdown-in-linux-os)
```
%%cpp socketpair.cpp
%run gcc socketpair.cpp -o socketpair.exe
%run ./socketpair.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
write(fd, "X", 1);
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
int main() {
union {
int arr_fd[2];
struct {
int fd_1; // ==arr_fd[0] can change order, it will work
int fd_2; // ==arr_fd[1]
};
} fds;
    assert(socketpair(AF_UNIX, SOCK_STREAM, 0, fds.arr_fd) == 0); // socketpair creates a pair of connected sockets (essentially a pipe)
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
close(fds.fd_2);
write_smth(fds.fd_1);
        shutdown(fds.fd_1, SHUT_RDWR); // important, try to comment this out and look at the timing. If we don't shut the connection down, the reader will sit and wait for data even when there is none left
close(fds.fd_1);
log_printf("Writing is done\n");
sleep(3);
return 0;
}
if ((pid_2 = fork()) == 0) {
close(fds.fd_1);
read_all(fds.fd_2);
shutdown(fds.fd_2, SHUT_RDWR);
close(fds.fd_2);
return 0;
}
close(fds.fd_1);
close(fds.fd_2);
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
```
# <a name="socket_unix"></a> socket + AF_UNIX + TCP
```
%%cpp socket_unix.cpp
%run gcc socket_unix.cpp -o socket_unix.exe
%run ./socket_unix.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <sys/un.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
write(fd, "X", 1);
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
// important to use "/tmp/*", otherwise you can have problems with permissions
const char* SOCKET_PATH = "/tmp/my_precious_unix_socket";
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = socket(AF_UNIX, SOCK_STREAM, 0); // == connection_fd in this case
conditional_handle_error(socket_fd == -1, "can't initialize socket");
        // The type of the address variable (sockaddr_un) differs from the one in the next example (i.e. the type depends on the kind of connection used)
struct sockaddr_un addr = {.sun_family = AF_UNIX};
strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
        // Cast sockaddr_un* -> sockaddr*. Meet C-style abstract structures.
int connect_ret = connect(socket_fd, (const struct sockaddr*)&addr, sizeof(addr.sun_path));
conditional_handle_error(connect_ret == -1, "can't connect to unix socket");
write_smth(socket_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_UNIX, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
        unlink(SOCKET_PATH); // remove the socket file if it exists, because bind fails if it does
struct sockaddr_un addr = {.sun_family = AF_UNIX};
strncpy(addr.sun_path, SOCKET_PATH, sizeof(addr.sun_path) - 1);
int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr.sun_path));
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
int listen_ret = listen(socket_fd, LISTEN_BACKLOG);
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
struct sockaddr_un peer_addr = {0};
socklen_t peer_addr_size = sizeof(struct sockaddr_un);
        int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size); // After accept you can fork and handle the connection in a separate process
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR);
close(connection_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
unlink(SOCKET_PATH);
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
```
# <a name="socket_inet"></a> socket + AF_INET + TCP
[An article on socket programming in linux that looks decent at first glance](https://www.rsdn.org/article/unix/sockets.xml)
[A stackoverflow answer about what shutdown does](https://stackoverflow.com/a/23483487)
```
%%cpp socket_inet.cpp
%run gcc -DDEBUG socket_inet.cpp -o socket_inet.exe
%run ./socket_inet.exe
%run diff socket_unix.cpp socket_inet.cpp | grep -v "// %" | grep -e '>' -e '<' -C 1
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
int write_ret = write(fd, "X", 1);
conditional_handle_error(write_ret != 1, "writing failed");
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
const int PORT = 31008;
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
        sleep(1); // Needed so that the server has time to start.
        // In the real world, errors on the user's side are handled with retries.
int socket_fd = socket(AF_INET, SOCK_STREAM, 0); // == connection_fd in this case
        conditional_handle_error(socket_fd == -1, "can't initialize socket"); // Check for an error. Always do this, because anything (and anywhere) can break when working with the network
        // Build the address
        struct sockaddr_in addr; // Address structure of the server we are connecting to
        addr.sin_family = AF_INET; // Set the protocol family
        addr.sin_port = htons(PORT); // Set the port. htons converts host byte order to network byte order (little endian to big endian).
        struct hostent *hosts = gethostbyname("localhost"); // simple function but it is legacy. Prefer getaddrinfo. Here we get information about the host named localhost
conditional_handle_error(!hosts, "can't get host by name");
        memcpy(&addr.sin_addr, hosts->h_addr_list[0], sizeof(addr.sin_addr)); // Put the first address from hosts into addr
        int connect_ret = connect(socket_fd, (struct sockaddr*)&addr, sizeof(addr)); // Connect here
conditional_handle_error(connect_ret == -1, "can't connect to unix socket");
write_smth(socket_fd);
log_printf("writing is done\n");
        shutdown(socket_fd, SHUT_RDWR); // Close the connection
        close(socket_fd); // Close the file descriptor of the already shut-down connection. It is worth doing both.
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
        // See Yakovlev's reading. These calls say that we are ready to reuse the port (because it may not yet be fully released after its previous use)
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in addr = {.sin_family = AF_INET, .sin_port = htons(PORT)};
// addr.sin_addr == 0, so we are ready to receive connections directed to all our addresses
        int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr)); // Bind the socket to the port
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
        int listen_ret = listen(socket_fd, LISTEN_BACKLOG); // Say that we are ready to accept connections. No more than LISTEN_BACKLOG pending at a time
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
        struct sockaddr_in peer_addr = {0}; // The address of the client that connects to us will be written here
        socklen_t peer_addr_size = sizeof(struct sockaddr_in); // Pass the length so that accept() writes the address safely and does not overflow anything
        int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size); // Accept a connection and record the peer address
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR); // }
        close(connection_fd); // } Close the connection socket
shutdown(socket_fd, SHUT_RDWR); // }
        close(socket_fd); // } Close the listening socket itself
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
```
# getaddrinfo
Resolving an address by name.
[Documentation](https://linux.die.net/man/3/getaddrinfo)
The implementation is taken from the documentation. But it didn't work as-is, so it had to be tweaked a little :)
```
%%cpp getaddrinfo.cpp
%run gcc -DDEBUG getaddrinfo.cpp -o getaddrinfo.exe
%run ./getaddrinfo.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netdb.h>
#include <string.h>
int try_connect_by_name(const char* name, int port, int ai_family) {
struct addrinfo hints;
struct addrinfo *result, *rp;
int sfd, s, j;
size_t len;
ssize_t nread;
/* Obtain address(es) matching host/port */
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = ai_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = 0;
hints.ai_protocol = 0; /* Any protocol */
char port_s[20];
sprintf(port_s, "%d", port);
s = getaddrinfo(name, port_s, &hints, &result);
if (s != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
exit(EXIT_FAILURE);
}
/* getaddrinfo() returns a list of address structures.
Try each address until we successfully connect(2).
If socket(2) (or connect(2)) fails, we (close the socket
and) try the next address. */
for (rp = result; rp != NULL; rp = rp->ai_next) {
char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
if (getnameinfo(rp->ai_addr, rp->ai_addrlen, hbuf, sizeof(hbuf), sbuf, sizeof(sbuf), NI_NUMERICHOST | NI_NUMERICSERV) == 0)
fprintf(stderr, "Try ai_family=%d host=%s, serv=%s\n", rp->ai_family, hbuf, sbuf);
sfd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (sfd == -1)
continue;
if (connect(sfd, rp->ai_addr, rp->ai_addrlen) != -1)
break; /* Success */
close(sfd);
}
freeaddrinfo(result);
if (rp == NULL) { /* No address succeeded */
fprintf(stderr, "Could not connect\n");
return -1;
}
return sfd;
}
int main() {
try_connect_by_name("localhost", 22, AF_UNSPEC);
try_connect_by_name("localhost", 22, AF_INET6);
try_connect_by_name("ya.ru", 80, AF_UNSPEC);
try_connect_by_name("ya.ru", 80, AF_INET6);
return 0;
}
```
# <a name="socket_inet6"></a> socket + AF_INET6 + getaddrinfo + TCP
getaddrinfo has to be used here because of IPv6. It also had to be bent a little: with the implementation from the manual, rp->ai_socktype and rp->ai_protocol produced values unsuitable for establishing the connection.
```
%%cpp socket_inet6.cpp
%run gcc -DDEBUG socket_inet6.cpp -o socket_inet6.exe
%run ./socket_inet6.exe
%run diff socket_inet.cpp socket_inet6.cpp | grep -v "// %" | grep -e '>' -e '<' -C 1
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
void write_smth(int fd) {
for (int i = 0; i < 1000; ++i) {
int write_ret = write(fd, "X", 1);
conditional_handle_error(write_ret != 1, "writing failed");
struct timespec t = {.tv_sec = 0, .tv_nsec = 10000};
nanosleep(&t, &t);
}
}
void read_all(int fd) {
int bytes = 0;
while (true) {
char c;
int r = read(fd, &c, 1);
if (r > 0) {
bytes += r;
} else if (r < 0) {
assert(errno == EAGAIN);
} else {
break;
}
}
log_printf("Read %d bytes\n", bytes);
}
int try_connect_by_name(const char* name, int port, int ai_family) {
struct addrinfo hints;
struct addrinfo *result, *rp;
int sfd, s, j;
size_t len;
ssize_t nread;
/* Obtain address(es) matching host/port */
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_family = ai_family;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = 0;
hints.ai_protocol = 0; /* Any protocol */
char port_s[20];
sprintf(port_s, "%d", port);
s = getaddrinfo(name, port_s, &hints, &result);
if (s != 0) {
fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(s));
exit(EXIT_FAILURE);
}
/* getaddrinfo() returns a list of address structures.
Try each address until we successfully connect(2).
If socket(2) (or connect(2)) fails, we (close the socket
and) try the next address. */
for (rp = result; rp != NULL; rp = rp->ai_next) {
char hbuf[NI_MAXHOST], sbuf[NI_MAXSERV];
if (getnameinfo(rp->ai_addr, rp->ai_addrlen, hbuf, sizeof(hbuf), sbuf, sizeof(sbuf), NI_NUMERICHOST | NI_NUMERICSERV) == 0)
fprintf(stderr, "Try ai_family=%d host=%s, serv=%s\n", rp->ai_family, hbuf, sbuf);
sfd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
if (sfd == -1)
continue;
if (connect(sfd, rp->ai_addr, rp->ai_addrlen) != -1)
break; /* Success */
close(sfd);
}
freeaddrinfo(result);
if (rp == NULL) { /* No address succeeded */
fprintf(stderr, "Could not connect\n");
return -1;
}
return sfd;
}
const int PORT = 31008;
const int LISTEN_BACKLOG = 2;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = try_connect_by_name("localhost", PORT, AF_INET6);
write_smth(socket_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("client finished\n");
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET6, SOCK_STREAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in6 addr = {.sin6_family = AF_INET6, .sin6_port = htons(PORT)};
// addr.sin6_addr == 0, so we are ready to receive connections directed to all our addresses
int bind_ret = bind(socket_fd, (struct sockaddr*)&addr, sizeof(addr));
conditional_handle_error(bind_ret == -1, "can't bind to unix socket");
int listen_ret = listen(socket_fd, LISTEN_BACKLOG);
conditional_handle_error(listen_ret == -1, "can't listen to unix socket");
struct sockaddr_in6 peer_addr = {0};
socklen_t peer_addr_size = sizeof(struct sockaddr_in6);
int connection_fd = accept(socket_fd, (struct sockaddr*)&peer_addr, &peer_addr_size);
conditional_handle_error(connection_fd == -1, "can't accept incoming connection");
read_all(connection_fd);
shutdown(connection_fd, SHUT_RDWR);
close(connection_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
```
# <a name="socket_udp"></a> socket + AF_INET + UDP
```
%%cpp socket_inet.cpp
%run gcc -DDEBUG socket_inet.cpp -o socket_inet.exe
%run ./socket_inet.exe
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <assert.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/socket.h>
#include <errno.h>
#include <time.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
char* extract_t(char* s) { s[19] = '\0'; return s + 10; }
#define log_printf_impl(fmt, ...) { time_t t = time(0); dprintf(2, "%s : " fmt "%s", extract_t(ctime(&t)), __VA_ARGS__); }
#define log_printf(...) log_printf_impl(__VA_ARGS__, "")
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
const int PORT = 31008;
int main() {
pid_t pid_1, pid_2;
if ((pid_1 = fork()) == 0) {
// client
sleep(1);
int socket_fd = socket(AF_INET, SOCK_DGRAM, 0); // create a UDP socket
conditional_handle_error(socket_fd == -1, "can't initialize socket");
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_port = htons(PORT),
.sin_addr = {.s_addr = htonl(INADDR_LOOPBACK)}, // a more efficient way to assign the localhost address
};
int written_bytes;
// send the first datagram, explicitly specifying the recipient (the sendto function)
const char msg1[] = "Hello 1";
written_bytes = sendto(socket_fd, msg1, sizeof(msg1), 0,
(struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(written_bytes == -1, "can't sendto");
// here we call connect. For UDP it simply stores the address, no data is sent over the network
// send the second datagram to the stored address, using the send function
const char msg2[] = "Hello 2";
int connect_ret = connect(socket_fd, (struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(connect_ret == -1, "can't connect OoOo");
written_bytes = send(socket_fd, msg2, sizeof(msg2), 0);
conditional_handle_error(written_bytes == -1, "can't send");
// send the third datagram (write is equivalent to send with the last argument = 0)
const char msg3[] = "LastHello";
written_bytes = write(socket_fd, msg3, sizeof(msg3));
conditional_handle_error(written_bytes == -1, "can't write");
log_printf("client finished\n");
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
return 0;
}
if ((pid_2 = fork()) == 0) {
// server
int socket_fd = socket(AF_INET, SOCK_DGRAM, 0);
conditional_handle_error(socket_fd == -1, "can't initialize socket");
#ifdef DEBUG
int reuse_val = 1;
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEADDR, &reuse_val, sizeof(reuse_val));
setsockopt(socket_fd, SOL_SOCKET, SO_REUSEPORT, &reuse_val, sizeof(reuse_val));
#endif
struct sockaddr_in addr = {
.sin_family = AF_INET,
.sin_port = htons(PORT),
.sin_addr = {.s_addr = htonl(INADDR_ANY)}, // a more robust way to say that we are ready to receive on any local address (previously we just implicitly left it as 0)
};
int bind_ret = bind(socket_fd, (struct sockaddr *)&addr, sizeof(addr));
conditional_handle_error(bind_ret < 0, "can't bind socket");
char buf[1024];
int bytes_read;
while (true) {
// last 2 arguments: struct sockaddr *src_addr, socklen_t *addrlen)
bytes_read = recvfrom(socket_fd, buf, 1024, 0, NULL, NULL);
buf[bytes_read] = '\0';
log_printf("%s\n", buf);
if (strcmp("LastHello", buf) == 0) {
break;
}
}
log_printf("server finished\n");
return 0;
}
int status;
assert(waitpid(pid_1, &status, 0) != -1);
assert(waitpid(pid_2, &status, 0) != -1);
return 0;
}
```
# QA
```
Димитрис Голяр, [Feb 23, 2020, 18:11:26 (23.02.2020, 18:09:14)]:
Hi! I have a question about how the server works. Task 14-1 says the program must listen for connections on the localhost server. What actually happens if I put something other than localhost there? Will I be listening for connections of some other server?
Yuri Pechatnov, [Feb 23, 2020, 18:36:07]:
The way I understand it: a host can have several IP addresses. For example, a global one on the internet and 127.0.0.1 (=localhost)
If you specify address 0 when creating the server, you accept packets addressed to any IP of this host
If you specify a concrete address, you accept only packets addressed to that concrete address
And if you specify localhost, you handle only those packets whose destination address is 127.0.0.1
Such packets could only have been sent from your own host (otherwise they would have stayed on the sending host and never reached you)
By the way, this detail bites when launching jupyter notebook. If you don't pass "--ip=0.0.0.0" you won't be able to connect to it from another machine, because it will only listen for packets addressed to localhost
```
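To make the localhost-vs-any-address difference concrete, here is a minimal sketch (not part of the original material; the helper name and the backlog value are made up for illustration) of the two bind choices discussed above:
```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Sketch: binding to INADDR_ANY (== 0) accepts connections addressed to any local IP,
// while binding to INADDR_LOOPBACK accepts only connections whose destination is 127.0.0.1.
static int make_listening_socket(int port, int loopback_only) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) {
        return -1;
    }
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(loopback_only ? INADDR_LOOPBACK : INADDR_ANY);
    if (bind(fd, (struct sockaddr*)&addr, sizeof(addr)) == -1 || listen(fd, 2) == -1) {
        close(fd);
        return -1;
    }
    return fd;
}
```
The server examples in this notebook leave sin_addr as 0, i.e. the INADDR_ANY case.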
# <a name="hw"></a> Comments on the homework
* inf14-0: posix/sockets/tcp-client -- several subtasks need to be solved:
1. Build the address. You can use the functions that turn a domain name into an address (they do not care whether they convert "192.168.1.2" or "ya.ru"), or the dedicated functions inet_aton / inet_pton (a sketch follows this list).
2. Establish the connection, the same way as before.
3. Write the logic for reading/writing the numbers. Since the byte order is little-endian, there is nothing network-specific here at all.
* inf14-1: posix/sockets/http-server-1 -- this task is more about working with files than about the networking part. The only subtle point is reacting to signals. You can simply keep the file descriptors in atomics and close them in the handler, followed by exit. Or you can go further with I/O multiplexing (covered in the next seminar).
* inf14-2: posix/sockets/udp-client -- we will go over UDP in the next seminar, or you can read up on it yourself; it is simple by comparison. (There is already an example in this notebook.)
* inf14-3: posix/sockets/http-server-2 -- a more elaborate version of inf14-1, but not in the networking part. Just recall how to check whether a file is executable and run it, forwarding the file descriptors correctly.
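A minimal sketch of the address-building step from inf14-0 using inet_pton; the IP string and port here are illustrative values, not ones required by the task:
```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>

// Sketch: fill a sockaddr_in from a numeric IPv4 string with inet_pton.
int main() {
    struct sockaddr_in addr = {.sin_family = AF_INET, .sin_port = htons(8080)}; // port is an arbitrary example
    if (inet_pton(AF_INET, "192.168.1.2", &addr.sin_addr) != 1) { // address is an arbitrary example
        fprintf(stderr, "bad address\n");
        return 1;
    }
    printf("parsed address: %s\n", inet_ntoa(addr.sin_addr));
    return 0;
}
```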
A longer comment on the server tasks:
`man sendfile` -- this function will come in handy.
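A rough sketch of how sendfile can be used to push a whole file into a connection without a userspace buffer (error handling is minimal; connection_fd, path and the function name are placeholders, not names from the task):
```
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

// Sketch: copy a regular file into a connected socket with sendfile(2).
int send_file_to_socket(int connection_fd, const char* path) {
    int file_fd = open(path, O_RDONLY);
    if (file_fd == -1) {
        return -1;
    }
    struct stat st;
    if (fstat(file_fd, &st) == -1) {
        close(file_fd);
        return -1;
    }
    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t sent = sendfile(connection_fd, file_fd, &offset, st.st_size - offset);
        if (sent == -1) {
            close(file_fd);
            return -1;
        }
    }
    close(file_fd);
    return 0;
}
```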
Looking at how you handle signals in the server tasks, in most cases it is pretty scary.
Unfortunately I cannot offer a single canonical way to do this well, but I suggest looking in the following directions:
1. signalfd -- information about signals can be read from a file descriptor, so you can epoll over, say, the pair (socket_fd, signal_fd) and handle an arriving signal synchronously and cleanly (a sketch follows this list).
2. In handlers, only set flags recording that signals have arrived. Do not set SA_RESTART. Check the flags in the main loop and after every system call.
3. Blocking signals. This one is trickier: if signals are blocked during, say, accept, you will most likely not be interrupted. In general you can protect some code regions by blocking signals, but you should not make blocking calls inside those regions. (You can, however, do this: use epoll to wait until something appears on socket_fd, and then, inside the protected section, call connection_fd = accept(…), which will return immediately.)
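A minimal sketch of direction 1 (signalfd), assuming we only care about SIGTERM and SIGINT; the returned descriptor can then be added to the same epoll set as the listening socket:
```
#include <signal.h>
#include <sys/signalfd.h>

// Sketch: turn SIGTERM/SIGINT into data readable from a file descriptor.
// Reading one struct signalfd_siginfo from the returned fd consumes one pending signal.
int make_stop_signal_fd(void) {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGTERM);
    sigaddset(&mask, SIGINT);
    // The signals must be blocked so that they are delivered only via the descriptor.
    if (sigprocmask(SIG_BLOCK, &mask, NULL) == -1) {
        return -1;
    }
    return signalfd(-1, &mask, 0);
}
```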
Classic mistakes:
1. Blocking signals where it is not needed.
2. atomic_connection_fd = accept(…); plus an uncontrollably asynchronous handler in which atomic_connection_fd is supposed to be closed and exit called.
In that case the handler can fire after accept returns but before the atomic is assigned, and the connection will never be closed.
# <a name="hw_server"></a> A relatively safe template for the server homework
There is a lot of frankly bad signal handling in the submissions (it is easy to come up with cases where those solutions break), so here is my version (without the elided parts it did pass ejudge).
The idea is to avoid asynchronous signal handling and the problems that come with it: turn an incoming signal into data in a descriptor and watch that descriptor with epoll.
```
%%cpp server_sol.c --ejudge-style
//%run gcc server_sol.c -o server_sol.exe
//%run ./server_sol.exe 30045
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <signal.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <wait.h>
#include <sys/epoll.h>
#include <assert.h>
#define conditional_handle_error(stmt, msg) \
do { if (stmt) { perror(msg " (" #stmt ")"); exit(EXIT_FAILURE); } } while (0)
//...
// must keep working until something readable appears on stop_fd
int server_main(int argc, char** argv, int stop_fd) {
assert(argc >= 2);
//...
int epoll_fd = epoll_create1(0);
{
int fds[] = {stop_fd, socket_fd, -1};
for (int* fd = fds; *fd != -1; ++fd) {
struct epoll_event event = {
.events = EPOLLIN | EPOLLERR | EPOLLHUP,
.data = {.fd = *fd}
};
epoll_ctl(epoll_fd, EPOLL_CTL_ADD, *fd, &event);
}
}
while (true) {
struct epoll_event event;
int epoll_ret = epoll_wait(epoll_fd, &event, 1, 1000); // Read events from the epoll object (i.e. from the set of file descriptors that have pending events)
if (epoll_ret <= 0) {
continue;
}
if (event.data.fd == stop_fd) {
break;
}
// this returns immediately, since we have already waited in epoll
int fd = accept(socket_fd, NULL, NULL);
// ... and here we handle the connection
shutdown(fd, SHUT_RDWR);
close(fd);
}
close(epoll_fd);
shutdown(socket_fd, SHUT_RDWR);
close(socket_fd);
return 0;
}
// The main work is done in the child process.
// This process receives signals and writes to the pipe when it is time to stop
// (By the way, the extra process and the pipe could be replaced with signalfd, but that is less portable)
// (You could also install a signal handler and write to the pipe from it, i.e. avoid the extra process here)
int main(int argc, char** argv) {
sigset_t full_mask;
sigfillset(&full_mask);
sigprocmask(SIG_BLOCK, &full_mask, NULL);
int fds[2];
assert(pipe(fds) == 0);
int child_pid = fork();
assert(child_pid >= 0);
if (child_pid == 0) {
close(fds[1]);
server_main(argc, argv, fds[0]);
close(fds[0]);
return 0;
} else {
// Code of a lazy person who simply copied this template
close(fds[0]);
while (1) {
siginfo_t info;
sigwaitinfo(&full_mask, &info);
int received_signal = info.si_signo;
if (received_signal == SIGTERM || received_signal == SIGINT) {
int written = write(fds[1], "X", 1);
conditional_handle_error(written != 1, "writing failed");
close(fds[1]);
break;
}
}
int status;
assert(waitpid(child_pid, &status, 0) != -1);
}
return 0;
}
```
# Feature Engineering and Creation
#### v 2.0
In this feature engineering pipeline, the focus is on trying to improve the results of the XGBoost model.
## Imports and Setup
```
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
from collections import Counter
from sklearn.decomposition import TruncatedSVD, PCA
from sklearn.feature_selection import SelectKBest
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
sns.set(style='darkgrid', palette='pastel')
pd.options.display.max_columns = None
data = pd.read_csv('../data/processed/0_cleaned.csv')
data_backup = data.copy()
label = data[['Value', 'Wage']]
data.drop(columns=['Value', 'Wage'], inplace=True)
data.head(5)
label.isnull().sum()
```
## Transform Categorical Features
```
categorical_columns = np.asarray([not np.issubdtype(data[col].dtype, np.number) for col in data.columns], dtype=bool)
cat_col_names = [col for col in data.columns if not np.issubdtype(data[col].dtype, np.number)]
num_col_names = [col for col in data.columns if np.issubdtype(data[col].dtype, np.number)]
categorical_data = data.iloc[:, categorical_columns]
non_categorical = data.iloc[:, ~categorical_columns]
categorical_data.reset_index(inplace=True, drop=True)
non_categorical.reset_index(inplace=True, drop=True)
```
### Transform Using OneHotEncoding
```
cardinality_threshold = 10
cols_to_reduce_dim = []
for c in cat_col_names:
levels = categorical_data[c].drop_duplicates().shape[0]
if levels > cardinality_threshold:
cols_to_reduce_dim.append(c)
df_reduce = categorical_data[cols_to_reduce_dim]
df_reduce = pd.get_dummies(df_reduce)
svd = PCA(n_components=0.85)
df_reduce = svd.fit_transform(df_reduce)
column_names = []
for col in range(df_reduce.shape[1]):
column_names.append('reduced_col_{}'.format(col))
df_reduce = pd.DataFrame(df_reduce, columns=column_names)
df_dummies = categorical_data.drop(cols_to_reduce_dim, axis=1)
df_dummies = pd.get_dummies(df_dummies)
df_dummies = df_dummies.join(df_reduce)
df_dummies = df_dummies.join(non_categorical)
```
### Transform Using Ordinal Encoding
```
encoder = OrdinalEncoder()
encoder.fit(categorical_data)
column_names = categorical_data.columns
categorical_data = encoder.transform(categorical_data)
df_ordinal = pd.DataFrame(categorical_data, columns=column_names)
del categorical_data
df_ordinal = non_categorical.join(df_ordinal)
del non_categorical
```
## Data Boxplot
```
def get_boxpolt_info(data, size=[10,5], axis_rotation=0):
plt.rcParams['figure.figsize'] = size
ax = sns.boxplot(data=data)
plt.title('Numerical Features Distribution')
if axis_rotation:
plt.xticks(rotation=axis_rotation)
plt.show()
get_boxpolt_info(df_ordinal[num_col_names], size=(20, 5), axis_rotation=90)
get_boxpolt_info(df_ordinal[cat_col_names], size=(20, 5), axis_rotation=90)
df_ordinal.reset_index(inplace=True, drop=True)
df_dummies.reset_index(inplace=True, drop=True)
label.reset_index(inplace=True, drop=True)
df_ordinal_final = df_ordinal.join(label)
df_ordinal_final.to_csv('../data/processed/2_1_processed_ordinal_encoding_xgboost.csv', index_label=False)
df_dummies_final = df_dummies.join(label)
df_dummies_final.to_csv('../data/processed/2_1_processed_onehot_encoding_xgboost.csv', index_label=False)
```
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from statsmodels.tsa.arima.model import ARIMA
sns.set_theme()
# Load data (city)
df = pd.read_csv('../data/GlobalLandTemperaturesByCity.csv')
df['dt'] = pd.DatetimeIndex(df['dt'])
df['Year'] = pd.DatetimeIndex(df['dt']).year
df = df[df['Year'] != 1743]
df = df[df['Year'] != 2013]
# For each country average all AverageTemperature values and add iso_alpha to countries
df_avg = pd.DataFrame({'AverageTemperature': df.groupby('Country')['AverageTemperature'].mean(), 'Country': df.groupby('Country')['Country'].first()})
df_avg = df_avg.reset_index(drop=True)
#print('Average country temperature df: ', df_avg)
df_avg_city = pd.DataFrame({'AverageTemperature': df.groupby('City')['AverageTemperature'].mean(), 'City': df.groupby('City')['City'].first()})
df_avg_city = df_avg_city.reset_index(drop=True)
#print('\n \n Average city temperature df: ', df_avg_city)
# For each Country for each Year average all AverageTemperature values
D1 = df.groupby(['Year', 'Country'])['AverageTemperature'].mean().reset_index()
meantemp = D1.groupby('Year')['AverageTemperature'].mean().reset_index() # Rough - just a plain average.
meantemp1900 = meantemp[156:] # 1900-2012
# Normalize temperature relative to median
mediantemp = D1.groupby('Country')['AverageTemperature'].median()
df.groupby(['Year','Country'])['AverageTemperature'].mean().reset_index()['AverageTemperature']
#temp.columns = ['Year', 'Country', 'mean','median']
#D1['NormalizedTemperature'] = temp['mean'] - temp['median']
df
import copy
D2 = copy.deepcopy(D1)
normies = np.array([])
for country in D2['Country'].unique():
norm = np.array(D2.groupby('Country')['AverageTemperature'].get_group(country) - D2.groupby('Country')['AverageTemperature'].get_group(country).median())
normies = np.append(normies, norm)
D2['NormalizedTemperature'] = normies
# Imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_pacf, plot_acf
from statsmodels.tsa.stattools import adfuller
from itertools import product
from tqdm import tqdm_notebook
import warnings
warnings.filterwarnings("ignore")
sns.set_theme()
# fit an ARIMA model and plot residual error
# fit model
modeltype = ARIMA(meantemp1900.AverageTemperature, order=(2,1,2))
model = modeltype.fit()
# summary of fit model
print(model.summary())
# line plot of residuals
residuals = pd.DataFrame(model.resid)
residuals.plot()
# # density plot of residuals
residuals.plot(kind='kde')
# # summary stats of residuals
print(residuals.describe())
plt.plot(meantemp1900.AverageTemperature[2:])
plt.plot(model.fittedvalues[2:], color='red')
```
### Using TSA to predict future CO2 emissions
```
df3 = pd.read_csv('../data/annual-co-emissions-by-region.csv')
df3 = df3.rename(columns={'Annual CO2 emissions (zero filled)': "Annual CO2 emissions"})
df3 = df3.loc[df3['Entity'] == 'World']
df3 = df3.sort_values('Year')
px.line(df3, x = 'Year', y = 'Annual CO2 emissions')
```
#### First order difference to account for increasing trend
```
diff1 = np.diff(np.log(df3['Annual CO2 emissions']))
df3['Log transformed first order differenced CO2 emissions'] = np.pad(diff1, (0, 1), 'constant')
df3['Log transformed CO2 emissions'] = np.log(df3['Annual CO2 emissions'])
px.line(df3, x = 'Year', y = 'Log transformed first order differenced CO2 emissions')
```
#### Checking for stationarity
```
# Dickey-Fuller test of stationarity
ad_fuller_result = adfuller(df3['Log transformed first order differenced CO2 emissions'])
print(f'ADF Statistic: {ad_fuller_result[0]}')
print(f'p-value: {ad_fuller_result[1]}')
```
#### Inspecting the ACF and PACF
```
acf = plot_acf(df3['Log transformed first order differenced CO2 emissions'], lags = 20)
pacf = plot_pacf(df3['Log transformed first order differenced CO2 emissions'], lags = 20)
plt.show(acf)
plt.show(pacf)
# Finding optimal ARIMA order
def optimize_ARIMA(order_list, exog):
"""
Return dataframe with parameters and corresponding AIC
order_list - list with (p, d, q) tuples
exog - the exogenous variable
"""
results = []
for order in tqdm_notebook(order_list):
try:
model = ARIMA(df3['Log transformed first order differenced CO2 emissions'], order = order).fit()
except:
continue
aic = model.aic
results.append([order, model.aic])
result_df = pd.DataFrame(results)
print(result_df)
result_df.columns = ['(p, d, q)', 'AIC']
#Sort in ascending order, lower AIC is better
result_df = result_df.sort_values(by='AIC', ascending=True).reset_index(drop=True)
return result_df
ps = range(0, 8, 1)
d = 1
qs = range(0, 8, 1)# Create a list with all possible combination of parameters
parameters = product(ps, qs)
parameters_list = list(parameters)
order_list = []
for each in parameters_list:
each = list(each)
each.insert(1, 1)
each = tuple(each)
order_list.append(each)
result_df = optimize_ARIMA(order_list, exog=df3['Log transformed CO2 emissions'])
print(result_df[result_df.AIC == result_df.AIC.min()])
# 1,1,2 ARIMA Model
model = ARIMA(df3['Log transformed CO2 emissions'], order=(1,1,2))
model_fit = model.fit()
print(model_fit.summary())
model_fit
y = 2020
for i in range(10):
y = y + 1
df3 = df3.append({'Entity': 'World', 'Code': 'OWID_WRL','Year': y, 'Annual CO2 emissions': float("NAN"), 'Log transformed first order differenced CO2 emissions': float("NAN"),'Log transformed CO2 emissions': float("NAN")}, ignore_index=True)
preds = model_fit.get_prediction(0,281) # 95% conf
preds_ci = preds.conf_int()
preds_ci
preds_mu = preds.predicted_mean
df3['Predictions'] = preds_mu
# Plot
fig = px.line(df3, x='Year', y='Annual CO2 emissions')
# Overlay the exponentiated (back-transformed) predictions on the observed emissions
fig.add_scatter(x=df3['Year'], y=np.exp(df3['Predictions']), mode='lines')
# Show plot
fig.show()
```
|
github_jupyter
|
# Imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from statsmodels.tsa.arima.model import ARIMA
sns.set_theme()
# Load data (city)
df = pd.read_csv('../data/GlobalLandTemperaturesByCity.csv')
df['dt'] = pd.DatetimeIndex(df['dt'])
df['Year'] = pd.DatetimeIndex(df['dt']).year
df = df[df['Year'] != 1743]
df = df[df['Year'] != 2013]
# For each country average all AverageTemperature values and add iso_alpha to countries
df_avg = pd.DataFrame({'AverageTemperature': df.groupby('Country')['AverageTemperature'].mean(), 'Country': df.groupby('Country')['Country'].first()})
df_avg = df_avg.reset_index(drop=True)
#print('Average country temperature df: ', df_avg)
df_avg_city = pd.DataFrame({'AverageTemperature': df.groupby('City')['AverageTemperature'].mean(), 'City': df.groupby('City')['City'].first()})
df_avg_city = df_avg_city.reset_index(drop=True)
#print('\n \n Average city temperature df: ', df_avg_city)
# For each Country for each Year average all AverageTemperature values
D1 = df.groupby(['Year', 'Country'])['AverageTemperature'].mean().reset_index()
meantemp = D1.groupby('Year')['AverageTemperature'].mean().reset_index() # Rough - just a plain average.
meantemp1900 = meantemp[156:] # 1900-2012
# Normalize temperature relative to median
mediantemp = D1.groupby('Country')['AverageTemperature'].median()
df.groupby(['Year','Country'])['AverageTemperature'].mean().reset_index()['AverageTemperature']
#temp.columns = ['Year', 'Country', 'mean','median']
#D1['NormalizedTemperature'] = temp['mean'] - temp['median']
df
import copy
D2 = copy.deepcopy(D1)
normies = np.array([])
for country in D2['Country'].unique():
norm = np.array(D2.groupby('Country')['AverageTemperature'].get_group(country) - D2.groupby('Country')['AverageTemperature'].get_group(country).median())
normies = np.append(normies, norm)
D2['NormalizedTemperature'] = normies
# Imports
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import plotly.express as px
import plotly.graph_objects as go
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_pacf, plot_acf
from statsmodels.tsa.stattools import adfuller
from itertools import product
from tqdm import tqdm_notebook
import warnings
warnings.filterwarnings("ignore")
sns.set_theme()
# fit an ARIMA model and plot residual error
# fit model
modeltype = ARIMA(meantemp1900.AverageTemperature, order=(2,1,2))
model = modeltype.fit()
# summary of fit model
print(model.summary())
# line plot of residuals
residuals = pd.DataFrame(model.resid)
residuals.plot()
# # density plot of residuals
residuals.plot(kind='kde')
# # summary stats of residuals
print(residuals.describe())
plt.plot(meantemp1900.AverageTemperature[2:])
plt.plot(model.fittedvalues[2:], color='red')
df3 = pd.read_csv('../data/annual-co-emissions-by-region.csv')
df3 = df3.rename(columns={'Annual CO2 emissions (zero filled)': "Annual CO2 emissions"})
df3 = df3.loc[df3['Entity'] == 'World']
df3 = df3.sort_values('Year')
px.line(df3, x = 'Year', y = 'Annual CO2 emissions')
diff1 = np.diff(np.log(df3['Annual CO2 emissions']))
df3['Log transformed first order differenced CO2 emissions'] = np.pad(diff1, (0, 1), 'constant')
df3['Log transformed CO2 emissions'] = np.log(df3['Annual CO2 emissions'])
px.line(df3, x = 'Year', y = 'Log transformed first order differenced CO2 emissions')
# Dickey-Fuller test of stationarity
ad_fuller_result = adfuller(df3['Log transformed first order differenced CO2 emissions'])
print(f'ADF Statistic: {ad_fuller_result[0]}')
print(f'p-value: {ad_fuller_result[1]}')
acf = plot_acf(df3['Log transformed first order differenced CO2 emissions'], lags = 20)
pacf = plot_pacf(df3['Log transformed first order differenced CO2 emissions'], lags = 20)
plt.show(acf)
plt.show(pacf)
# Finding optimal ARIMA order
def optimize_ARIMA(order_list, exog):
"""
Return dataframe with parameters and corresponding AIC
order_list - list with (p, d, q) tuples
exog - the exogenous variable
"""
results = []
for order in tqdm_notebook(order_list):
try:
model = ARIMA(df3['Log transformed first order differenced CO2 emissions'], order = order).fit()
except:
continue
aic = model.aic
results.append([order, model.aic])
result_df = pd.DataFrame(results)
print(result_df)
result_df.columns = ['(p, d, q)', 'AIC']
#Sort in ascending order, lower AIC is better
result_df = result_df.sort_values(by='AIC', ascending=True).reset_index(drop=True)
return result_df
ps = range(0, 8, 1)
d = 1
qs = range(0, 8, 1)# Create a list with all possible combination of parameters
parameters = product(ps, qs)
parameters_list = list(parameters)
order_list = []
for each in parameters_list:
each = list(each)
each.insert(1, 1)
each = tuple(each)
order_list.append(each)
result_df = optimize_ARIMA(order_list, exog=df3['Log transformed CO2 emissions'])
print(result_df[result_df.AIC == result_df.AIC.min()])
# 1,1,2 ARIMA Model
model = ARIMA(df3['Log transformed CO2 emissions'], order=(1,1,2))
model_fit = model.fit()
print(model_fit.summary())
model_fit
y = 2020
for i in range(10):
y = y + 1
df3 = df3.append({'Entity': 'World', 'Code': 'OWID_WRL','Year': y, 'Annual CO2 emissions': float("NAN"), 'Log transformed first order differenced CO2 emissions': float("NAN"),'Log transformed CO2 emissions': float("NAN")}, ignore_index=True)
preds = model_fit.get_prediction(0,281) # 95% conf
preds_ci = preds.conf_int()
preds_ci
preds_mu = preds.predicted_mean
df3['Predictions'] = preds_mu
# Plot
fig = px.line(df3, x='Year', y='Annual CO2 emissions')
# Overlay the exponentiated (back-transformed) predictions on the observed emissions
fig.add_scatter(x=df3['Year'], y=np.exp(df3['Predictions']), mode='lines')
# Show plot
fig.show()
| 0.675551 | 0.712945 |
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
# Save config information
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
number = 1
city_name = []
lat = []
lng = []
temp = []
humid = []
clouds = []
wind = []
for city in cities:
try:
        city_data = requests.get(url + 'q=' + city + '&units=' + units + '&appid=' + weather_api_key).json()
city_name.append(city_data['name'])
lat.append(city_data['coord']['lat'])
lng.append(city_data['coord']['lon'])
temp.append(city_data['main']['temp'])
humid.append(city_data['main']['humidity'])
clouds.append(city_data['clouds']['all'])
wind.append(city_data['wind']['speed'])
print(f'City number {number} of {len(cities)} complete. | Added {city}')
number = number + 1
except KeyError:
print(f'Missing data in city number {number} of {len(cities)}. | Skipping {city}')
number = number + 1
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
city_data_df = pd.DataFrame({'City': city_name,
'Latitude': lat,
'Longitude': lng,
'Temperature': temp,
'Humidity': humid,
'Cloudiness': clouds,
'Wind Speed': wind})
pd.DataFrame.to_csv(city_data_df, 'city_data.csv')
city_data_df.head()
print(city_data_df)
```
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
```
# Get the indices of cities that have humidity over 100%.
humid_cities = city_data_df[city_data_df['Humidity'] > 100].index
print(humid_cities)
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
clean_city_df = city_data_df.drop(humid_cities, inplace=False)
clean_city_df.head()
```
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
```
from datetime import date
```
## Latitude vs. Temperature Plot
```
plt.scatter(clean_city_df['Latitude'], clean_city_df['Temperature'])
plt.title(f'City Latitude vs. Temperature {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
plt.grid(True)
plt.savefig('lat_temp.png', bbox_inches='tight')
```
## Latitude vs. Humidity Plot
```
plt.scatter(clean_city_df['Latitude'], clean_city_df['Humidity'])
plt.title(f'City Latitude vs. Humidity {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.grid(True)
plt.savefig('lat_humid.png', bbox_inches='tight')
```
## Latitude vs. Cloudiness Plot
```
plt.scatter(clean_city_df['Latitude'], clean_city_df['Cloudiness'])
plt.title(f'City Latitude vs. Cloudiness {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.grid(True)
plt.savefig('lat_cloud.png', bbox_inches='tight')
```
## Latitude vs. Wind Speed Plot
```
plt.scatter(clean_city_df['Latitude'], clean_city_df['Wind Speed'])
plt.title(f'City Latitude vs. Wind Speed {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
plt.grid(True)
plt.savefig('lat_wind.png', bbox_inches='tight')
```
## Linear Regression
```
nothern = clean_city_df.loc[clean_city_df["Latitude"] >= 0.0]
nothern.reset_index(inplace=False)
southern = clean_city_df.loc[clean_city_df["Latitude"] < 0.0]
southern.reset_index(inplace=False)
def plotLinearRegression(xdata,ydata,xlbl,ylbl,lblpos,ifig):
(slope, intercept, rvalue, pvalue, stderr) = linregress(xdata, ydata)
print(f"The r-squared is: {rvalue}")
regress_values = xdata * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(xdata,ydata)
plt.plot(xdata,regress_values,"r-")
plt.annotate(line_eq,lblpos,fontsize=15,color="red")
plt.xlabel(xlbl)
plt.ylabel(ylbl)
plt.show()
```
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Temperature"
lblpos = (0,25)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,5)
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Temperature"
lblpos = (-55,90)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,6)
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Humidity"
lblpos = (45,10)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,7)
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Humidity"
lblpos = (-55,15)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,8)
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Cloudiness"
lblpos = (20,40)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,9)
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Cloudiness"
lblpos = (-55,50)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,10)
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Wind Speed"
lblpos = (0,30)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,11)
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
xlbl = "Latitude"
ylbl = "Wind Speed"
lblpos = (-25,33)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,12)
print('The temperature in any given area does seem to have a correlation with the latitude. There does not seem to be a correlation between the latitude and the cloud coverage in a given region. Wind speed is also negligible when compared to latitude.')
```
|
github_jupyter
|
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# Save config information
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
number = 1
city_name = []
lat = []
lng = []
temp = []
humid = []
clouds = []
wind = []
for city in cities:
try:
        city_data = requests.get(url + 'q=' + city + '&units=' + units + '&appid=' + weather_api_key).json()
city_name.append(city_data['name'])
lat.append(city_data['coord']['lat'])
lng.append(city_data['coord']['lon'])
temp.append(city_data['main']['temp'])
humid.append(city_data['main']['humidity'])
clouds.append(city_data['clouds']['all'])
wind.append(city_data['wind']['speed'])
print(f'City number {number} of {len(cities)} complete. | Added {city}')
number = number + 1
except KeyError:
print(f'Missing data in city number {number} of {len(cities)}. | Skipping {city}')
number = number + 1
city_data_df = pd.DataFrame({'City': city_name,
'Latitude': lat,
'Longitude': lng,
'Temperature': temp,
'Humidity': humid,
'Cloudiness': clouds,
'Wind Speed': wind})
pd.DataFrame.to_csv(city_data_df, 'city_data.csv')
city_data_df.head()
print(city_data_df)
# Get the indices of cities that have humidity over 100%.
humid_cities = city_data_df[city_data_df['Humidity'] > 100].index
print(humid_cities)
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
clean_city_df = city_data_df.drop(humid_cities, inplace=False)
clean_city_df.head()
from datetime import date
plt.scatter(clean_city_df['Latitude'], clean_city_df['Temperature'])
plt.title(f'City Latitude vs. Temperature {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Temperature (F)')
plt.grid(True)
plt.savefig('lat_temp.png', bbox_inches='tight')
plt.scatter(clean_city_df['Latitude'], clean_city_df['Humidity'])
plt.title(f'City Latitude vs. Humidity {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Humidity (%)')
plt.grid(True)
plt.savefig('lat_humid.png', bbox_inches='tight')
plt.scatter(clean_city_df['Latitude'], clean_city_df['Cloudiness'])
plt.title(f'City Latitude vs. Cloudiness {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Cloudiness (%)')
plt.grid(True)
plt.savefig('lat_cloud.png', bbox_inches='tight')
plt.scatter(clean_city_df['Latitude'], clean_city_df['Wind Speed'])
plt.title(f'City Latitude vs. Wind Speed {date.today()}')
plt.xlabel('Latitude')
plt.ylabel('Wind Speed (mph)')
plt.grid(True)
plt.savefig('lat_wind.png', bbox_inches='tight')
nothern = clean_city_df.loc[clean_city_df["Latitude"] >= 0.0]
nothern.reset_index(inplace=False)
southern = clean_city_df.loc[clean_city_df["Latitude"] < 0.0]
southern.reset_index(inplace=False)
def plotLinearRegression(xdata,ydata,xlbl,ylbl,lblpos,ifig):
(slope, intercept, rvalue, pvalue, stderr) = linregress(xdata, ydata)
print(f"The r-squared is: {rvalue}")
regress_values = xdata * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(xdata,ydata)
plt.plot(xdata,regress_values,"r-")
plt.annotate(line_eq,lblpos,fontsize=15,color="red")
plt.xlabel(xlbl)
plt.ylabel(ylbl)
plt.show()
xlbl = "Latitude"
ylbl = "Temperature"
lblpos = (0,25)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,5)
xlbl = "Latitude"
ylbl = "Temperature"
lblpos = (-55,90)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,6)
xlbl = "Latitude"
ylbl = "Humidity"
lblpos = (45,10)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,7)
xlbl = "Latitude"
ylbl = "Humidity"
lblpos = (-55,15)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,8)
xlbl = "Latitude"
ylbl = "Cloudiness"
lblpos = (20,40)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,9)
xlbl = "Latitude"
ylbl = "Cloudiness"
lblpos = (-55,50)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,10)
xlbl = "Latitude"
ylbl = "Wind Speed"
lblpos = (0,30)
plotLinearRegression(nothern[xlbl],nothern[ylbl],xlbl,ylbl,lblpos,11)
xlbl = "Latitude"
ylbl = "Wind Speed"
lblpos = (-25,33)
plotLinearRegression(southern[xlbl],southern[ylbl],xlbl,ylbl,lblpos,12)
print('The temperature in any given area does seem to have a correlation with the latitude. There does not seem to be a correlation between the latitude and the cloud coverage in a given region. Wind speed is also negligible when compared to latitude.')
| 0.386185 | 0.830113 |
# Direct Marketing with Amazon SageMaker Autopilot
---
---
## Contents
1. [Introduction](#Introduction)
1. [Prerequisites](#Prerequisites)
1. [Downloading the dataset](#Downloading)
1. [Upload the dataset to Amazon S3](#Uploading)
1. [Setting up the SageMaker Autopilot Job](#Settingup)
1. [Launching the SageMaker Autopilot Job](#Launching)
1. [Tracking Sagemaker Autopilot Job Progress](#Tracking)
1. [Results](#Results)
1. [Cleanup](#Cleanup)
## Introduction
Amazon SageMaker Autopilot is an automated machine learning (commonly referred to as AutoML) solution for tabular datasets. You can use SageMaker Autopilot in different ways: on autopilot (hence the name) or with human guidance, without code through SageMaker Studio, or using the AWS SDKs. This notebook, as a first glimpse, will use the AWS SDKs to simply create and deploy a machine learning model.
A typical introductory task in machine learning (the "Hello World" equivalent) is one that uses a dataset to predict whether a customer will enroll for a term deposit at a bank, after one or more phone calls. For more information about the task and the dataset used, see [Bank Marketing Data Set](https://archive.ics.uci.edu/ml/datasets/bank+marketing).
Direct marketing, through mail, email, phone, etc., is a common tactic to acquire customers. Because resources and a customer's attention are limited, the goal is to only target the subset of prospects who are likely to engage with a specific offer. Predicting those potential customers based on readily available information like demographics, past interactions, and environmental factors is a common machine learning problem. You can imagine that this task would readily translate to marketing lead prioritization in your own organization.
This notebook demonstrates how you can use Autopilot on this dataset to get the most accurate ML pipeline through exploring a number of potential options, or "candidates". Each candidate generated by Autopilot consists of two steps. The first step performs automated feature engineering on the dataset and the second step trains and tunes an algorithm to produce a model. When you deploy this model, it follows similar steps. Feature engineering followed by inference, to decide whether the lead is worth pursuing or not. The notebook contains instructions on how to train the model as well as to deploy the model to perform batch predictions on a set of leads. Where it is possible, use the Amazon SageMaker Python SDK, a high level SDK, to simplify the way you interact with Amazon SageMaker.
Other examples demonstrate how to customize models in various ways. For instance, models deployed to devices typically have memory constraints that need to be satisfied as well as accuracy. Other use cases have real-time deployment requirements and latency constraints. For now, keep it simple.
## Prerequisites
Before you start the tasks in this tutorial, make sure you have the following:
- The Amazon Simple Storage Service (Amazon S3) bucket and prefix that you want to use for training and model data. This should be within the same Region as Amazon SageMaker training. The code below creates the default bucket, or uses it if it already exists.
- The IAM role to give Autopilot access to your data. See the Amazon SageMaker documentation for more information on IAM roles: https://docs.aws.amazon.com/sagemaker/latest/dg/security-iam.html
```
import sagemaker
import boto3
from sagemaker import get_execution_role
region = boto3.Session().region_name
session = sagemaker.Session()
bucket = session.default_bucket()
prefix = 'sagemaker/autopilot-dm'
role = get_execution_role()
sm = boto3.Session().client(service_name='sagemaker',region_name=region)
```
## Downloading the dataset<a name="Downloading"></a>
Download the [direct marketing dataset](!wget -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip) from the sample data s3 bucket.
\[Moro et al., 2014\] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
```
!wget -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
!unzip -o bank-additional.zip
local_data_path = './bank-additional/bank-additional-full.csv'
```
## Upload the dataset to Amazon S3<a name="Uploading"></a>
Before you run Autopilot on the dataset, first perform a check of the dataset to make sure that it has no obvious errors. The Autopilot process can take a long time, and it's generally a good practice to inspect the dataset before you start a job. This particular dataset is small, so you can inspect it in the notebook instance itself. If you have a larger dataset that will not fit in the notebook instance's memory, inspect the dataset offline using a big data analytics tool like Apache Spark. [Deequ](https://github.com/awslabs/deequ) is a library built on top of Apache Spark that can be helpful for performing checks on large datasets. Autopilot is capable of handling datasets up to 5 GB.
Read the data into a Pandas data frame and take a look.
```
import pandas as pd
data = pd.read_csv(local_data_path)
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 10) # Keep the output on one page
data
```
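Beyond eyeballing the frame, a few quick programmatic checks can catch obvious problems (unexpected shapes, missing values, a badly skewed target) before you commit to a multi-hour Autopilot run. This is only a sketch against the `data` frame loaded above; adapt the checks to your own dataset.
```
# Lightweight sanity checks on the loaded DataFrame (sketch)
print(data.shape)                   # number of rows and columns
print(data.dtypes.value_counts())   # mix of numeric and object (categorical) columns
print(data.isnull().sum().sum())    # total count of missing values
print(data['y'].value_counts())     # class balance of the target column
```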
Note that there are 20 features to help predict the target column 'y'.
Amazon SageMaker Autopilot takes care of preprocessing your data for you. You do not need to perform conventional data preprocessing techniques such as handling missing values, converting categorical features to numeric features, scaling data, or handling more complicated data types.
Moreover, splitting the dataset into training and validation splits is not necessary. Autopilot takes care of this for you. You may, however, want to split out a test set. That's next, although you use it for batch inference at the end instead of testing the model.
### Reserve some data for calling batch inference on the model
Divide the data into training and testing splits. The training split is used by SageMaker Autopilot. The testing split is reserved to perform inference using the suggested model.
```
train_data = data.sample(frac=0.8,random_state=200)
test_data = data.drop(train_data.index)
test_data_no_target = test_data.drop(columns=['y'])
```
### Upload the dataset to Amazon S3
Copy the file to Amazon Simple Storage Service (Amazon S3) in a .csv format for Amazon SageMaker training to use.
```
train_file = 'train_data.csv';
train_data.to_csv(train_file, index=False, header=True)
train_data_s3_path = session.upload_data(path=train_file, key_prefix=prefix + "/train")
print('Train data uploaded to: ' + train_data_s3_path)
test_file = 'test_data.csv';
test_data_no_target.to_csv(test_file, index=False, header=False)
test_data_s3_path = session.upload_data(path=test_file, key_prefix=prefix + "/test")
print('Test data uploaded to: ' + test_data_s3_path)
```
## Setting up the SageMaker Autopilot Job<a name="Settingup"></a>
After uploading the dataset to Amazon S3, you can invoke Autopilot to find the best ML pipeline to train a model on this dataset.
The required inputs for invoking a Autopilot job are:
* Amazon S3 location for input dataset and for all output artifacts
* Name of the column of the dataset you want to predict (`y` in this case)
* An IAM role
Currently Autopilot supports only tabular datasets in CSV format. Either all files should have a header row, or the first file of the dataset, when sorted in alphabetical/lexical order, is expected to have a header row.
```
input_data_config = [{
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': 's3://{}/{}/train'.format(bucket,prefix)
}
},
'TargetAttributeName': 'y'
}
]
output_data_config = {
'S3OutputPath': 's3://{}/{}/output'.format(bucket,prefix)
}
```
You can also specify the type of problem you want to solve with your dataset (`Regression, MulticlassClassification, BinaryClassification`). In case you are not sure, SageMaker Autopilot will infer the problem type based on statistics of the target column (the column you want to predict).
You have the option to limit the running time of a SageMaker Autopilot job by providing either the maximum number of pipeline evaluations or candidates (one pipeline evaluation is called a `Candidate` because it generates a candidate model) or providing the total time allocated for the overall Autopilot job. Under default settings, this job takes about four hours to run. This varies between runs because of the nature of the exploratory process Autopilot uses to find optimal training parameters.
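As a sketch of what such a configuration could look like (the limits and objective metric below are illustrative assumptions; this notebook simply relies on the defaults), the job request can carry a `ProblemType`, an `AutoMLJobObjective`, and completion criteria:
```
# Hypothetical configuration, not used by the job launched below
custom_job_config = {
    'CompletionCriteria': {
        'MaxCandidates': 50,                         # stop after 50 candidate pipelines
        'MaxAutoMLJobRuntimeInSeconds': 4 * 60 * 60  # or stop after roughly four hours
    }
}
# These could be passed to create_auto_ml_job alongside the required arguments:
#   ProblemType='BinaryClassification',
#   AutoMLJobObjective={'MetricName': 'F1'},
#   AutoMLJobConfig=custom_job_config
```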
## Launching the SageMaker Autopilot Job<a name="Launching"></a>
You can now launch the Autopilot job by calling the `create_auto_ml_job` API.
```
from time import gmtime, strftime, sleep
timestamp_suffix = strftime('%d-%H-%M-%S', gmtime())
auto_ml_job_name = 'automl-banking-' + timestamp_suffix
print('AutoMLJobName: ' + auto_ml_job_name)
sm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,
InputDataConfig=input_data_config,
OutputDataConfig=output_data_config,
RoleArn=role)
```
## Tracking SageMaker Autopilot job progress<a name="Tracking"></a>
A SageMaker Autopilot job consists of the following high-level steps:
* Analyzing Data, where the dataset is analyzed and Autopilot comes up with a list of ML pipelines that should be tried out on the dataset. The dataset is also split into train and validation sets.
* Feature Engineering, where Autopilot performs feature transformation on individual features of the dataset as well as at an aggregate level.
* Model Tuning, where the top performing pipeline is selected along with the optimal hyperparameters for the training algorithm (the last stage of the pipeline).
```
print ('JobStatus - Secondary Status')
print('------------------------------')
describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
print (describe_response['AutoMLJobStatus'] + " - " + describe_response['AutoMLJobSecondaryStatus'])
job_run_status = describe_response['AutoMLJobStatus']
while job_run_status not in ('Failed', 'Completed', 'Stopped'):
describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
job_run_status = describe_response['AutoMLJobStatus']
print (describe_response['AutoMLJobStatus'] + " - " + describe_response['AutoMLJobSecondaryStatus'])
sleep(30)
```
## Results
Now use the describe_auto_ml_job API to look up the best candidate selected by the SageMaker Autopilot job.
```
best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']
best_candidate_name = best_candidate['CandidateName']
print(best_candidate)
print('\n')
print("CandidateName: " + best_candidate_name)
print("FinalAutoMLJobObjectiveMetricName: " + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])
print("FinalAutoMLJobObjectiveMetricValue: " + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))
```
### Perform batch inference using the best candidate
Now that you have successfully completed the SageMaker Autopilot job on the dataset, create a model from any of the candidates by using [Inference Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html).
```
model_name = 'automl-banking-model-' + timestamp_suffix
model = sm.create_model(Containers=best_candidate['InferenceContainers'],
ModelName=model_name,
ExecutionRoleArn=role)
print('Model ARN corresponding to the best candidate is : {}'.format(model['ModelArn']))
```
You can use batch inference by using Amazon SageMaker batch transform. The same model can also be deployed to perform online inference using Amazon SageMaker hosting.
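The batch transform path is shown next. For the hosting (real-time inference) path mentioned above, a minimal sketch might look like the following; the endpoint names and the `ml.m5.xlarge` instance type are illustrative assumptions, and the cell is not executed in this notebook.
```
# Sketch: deploy the same model behind a real-time endpoint (not executed here)
endpoint_config_name = 'automl-banking-epc-' + timestamp_suffix
endpoint_name = 'automl-banking-ep-' + timestamp_suffix

sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': model_name,
        'InstanceType': 'ml.m5.xlarge',
        'InitialInstanceCount': 1
    }]
)
sm.create_endpoint(EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name)

# Once the endpoint is InService, single rows can be scored like this:
# smrt = boto3.client('sagemaker-runtime', region_name=region)
# response = smrt.invoke_endpoint(EndpointName=endpoint_name,
#                                 ContentType='text/csv',
#                                 Body=one_csv_row)
```
The rest of this section sticks to the batch transform workflow.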
```
transform_job_name = 'automl-banking-transform-' + timestamp_suffix
transform_input = {
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': test_data_s3_path
}
},
'ContentType': 'text/csv',
'CompressionType': 'None',
'SplitType': 'Line'
}
transform_output = {
'S3OutputPath': 's3://{}/{}/inference-results'.format(bucket,prefix),
}
transform_resources = {
'InstanceType': 'ml.m5.4xlarge',
'InstanceCount': 1
}
sm.create_transform_job(TransformJobName = transform_job_name,
ModelName = model_name,
TransformInput = transform_input,
TransformOutput = transform_output,
TransformResources = transform_resources
)
```
Watch the transform job for completion.
```
print ('JobStatus')
print('----------')
describe_response = sm.describe_transform_job(TransformJobName = transform_job_name)
job_run_status = describe_response['TransformJobStatus']
print (job_run_status)
while job_run_status not in ('Failed', 'Completed', 'Stopped'):
describe_response = sm.describe_transform_job(TransformJobName = transform_job_name)
job_run_status = describe_response['TransformJobStatus']
print (job_run_status)
sleep(30)
```
Now let's view the results of the transform job:
```
s3_output_key = '{}/inference-results/test_data.csv.out'.format(prefix);
local_inference_results_path = 'inference_results.csv'
s3 = boto3.resource('s3')
inference_results_bucket = s3.Bucket(session.default_bucket())
inference_results_bucket.download_file(s3_output_key, local_inference_results_path);
data = pd.read_csv(local_inference_results_path, sep=';')
pd.set_option('display.max_rows', 10) # Keep the output on one page
data
```
### View other candidates explored by SageMaker Autopilot
You can view all the candidates (pipeline evaluations with different hyperparameter combinations) that were explored by SageMaker Autopilot and sort them by their final performance metric.
```
candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName=auto_ml_job_name, SortBy='FinalObjectiveMetricValue')['Candidates']
index = 1
for candidate in candidates:
print (str(index) + " " + candidate['CandidateName'] + " " + str(candidate['FinalAutoMLJobObjectiveMetric']['Value']))
index += 1
```
### Candidate Generation Notebook
SageMaker Autopilot also auto-generates a Candidate Definitions notebook. This notebook can be used to interactively step through the various steps taken by SageMaker Autopilot to arrive at the best candidate. It can also be used to override various runtime parameters like parallelism, hardware used, algorithms explored, feature extraction scripts, and more.
The notebook can be downloaded from the following Amazon S3 location:
```
sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['CandidateDefinitionNotebookLocation']
```
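If you want to pull that notebook down to the local instance, one possible approach (a sketch using boto3; the local file name is an arbitrary choice) is to split the returned S3 URI and download the object:
```
# Sketch: download the auto-generated candidate definitions notebook
notebook_uri = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['CandidateDefinitionNotebookLocation']
nb_bucket, nb_key = notebook_uri.replace('s3://', '').split('/', 1)
boto3.resource('s3').Bucket(nb_bucket).download_file(nb_key, 'candidate_definitions.ipynb')
```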
### Data Exploration Notebook
SageMaker Autopilot also auto-generates a Data Exploration notebook, which can be downloaded from the following Amazon S3 location:
```
sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['DataExplorationNotebookLocation']
```
## Cleanup
The Autopilot job creates many underlying artifacts such as dataset splits, preprocessing scripts, or preprocessed data, etc. This code, when un-commented, deletes them. This operation deletes all the generated models and the auto-generated notebooks as well.
```
#s3 = boto3.resource('s3')
#bucket = s3.Bucket(bucket)
#job_outputs_prefix = '{}/output/{}'.format(prefix,auto_ml_job_name)
#bucket.objects.filter(Prefix=job_outputs_prefix).delete()
```
|
github_jupyter
|
import sagemaker
import boto3
from sagemaker import get_execution_role
region = boto3.Session().region_name
session = sagemaker.Session()
bucket = session.default_bucket()
prefix = 'sagemaker/autopilot-dm'
role = get_execution_role()
sm = boto3.Session().client(service_name='sagemaker',region_name=region)
!wget -N https://sagemaker-sample-data-us-west-2.s3-us-west-2.amazonaws.com/autopilot/direct_marketing/bank-additional.zip
!unzip -o bank-additional.zip
local_data_path = './bank-additional/bank-additional-full.csv'
import pandas as pd
data = pd.read_csv(local_data_path)
pd.set_option('display.max_columns', 500) # Make sure we can see all of the columns
pd.set_option('display.max_rows', 10) # Keep the output on one page
data
train_data = data.sample(frac=0.8,random_state=200)
test_data = data.drop(train_data.index)
test_data_no_target = test_data.drop(columns=['y'])
train_file = 'train_data.csv';
train_data.to_csv(train_file, index=False, header=True)
train_data_s3_path = session.upload_data(path=train_file, key_prefix=prefix + "/train")
print('Train data uploaded to: ' + train_data_s3_path)
test_file = 'test_data.csv';
test_data_no_target.to_csv(test_file, index=False, header=False)
test_data_s3_path = session.upload_data(path=test_file, key_prefix=prefix + "/test")
print('Test data uploaded to: ' + test_data_s3_path)
input_data_config = [{
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': 's3://{}/{}/train'.format(bucket,prefix)
}
},
'TargetAttributeName': 'y'
}
]
output_data_config = {
'S3OutputPath': 's3://{}/{}/output'.format(bucket,prefix)
}
from time import gmtime, strftime, sleep
timestamp_suffix = strftime('%d-%H-%M-%S', gmtime())
auto_ml_job_name = 'automl-banking-' + timestamp_suffix
print('AutoMLJobName: ' + auto_ml_job_name)
sm.create_auto_ml_job(AutoMLJobName=auto_ml_job_name,
InputDataConfig=input_data_config,
OutputDataConfig=output_data_config,
RoleArn=role)
print ('JobStatus - Secondary Status')
print('------------------------------')
describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
print (describe_response['AutoMLJobStatus'] + " - " + describe_response['AutoMLJobSecondaryStatus'])
job_run_status = describe_response['AutoMLJobStatus']
while job_run_status not in ('Failed', 'Completed', 'Stopped'):
describe_response = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)
job_run_status = describe_response['AutoMLJobStatus']
print (describe_response['AutoMLJobStatus'] + " - " + describe_response['AutoMLJobSecondaryStatus'])
sleep(30)
best_candidate = sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['BestCandidate']
best_candidate_name = best_candidate['CandidateName']
print(best_candidate)
print('\n')
print("CandidateName: " + best_candidate_name)
print("FinalAutoMLJobObjectiveMetricName: " + best_candidate['FinalAutoMLJobObjectiveMetric']['MetricName'])
print("FinalAutoMLJobObjectiveMetricValue: " + str(best_candidate['FinalAutoMLJobObjectiveMetric']['Value']))
model_name = 'automl-banking-model-' + timestamp_suffix
model = sm.create_model(Containers=best_candidate['InferenceContainers'],
ModelName=model_name,
ExecutionRoleArn=role)
print('Model ARN corresponding to the best candidate is : {}'.format(model['ModelArn']))
transform_job_name = 'automl-banking-transform-' + timestamp_suffix
transform_input = {
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': test_data_s3_path
}
},
'ContentType': 'text/csv',
'CompressionType': 'None',
'SplitType': 'Line'
}
transform_output = {
'S3OutputPath': 's3://{}/{}/inference-results'.format(bucket,prefix),
}
transform_resources = {
'InstanceType': 'ml.m5.4xlarge',
'InstanceCount': 1
}
sm.create_transform_job(TransformJobName = transform_job_name,
ModelName = model_name,
TransformInput = transform_input,
TransformOutput = transform_output,
TransformResources = transform_resources
)
print ('JobStatus')
print('----------')
describe_response = sm.describe_transform_job(TransformJobName = transform_job_name)
job_run_status = describe_response['TransformJobStatus']
print (job_run_status)
while job_run_status not in ('Failed', 'Completed', 'Stopped'):
describe_response = sm.describe_transform_job(TransformJobName = transform_job_name)
job_run_status = describe_response['TransformJobStatus']
print (job_run_status)
sleep(30)
s3_output_key = '{}/inference-results/test_data.csv.out'.format(prefix);
local_inference_results_path = 'inference_results.csv'
s3 = boto3.resource('s3')
inference_results_bucket = s3.Bucket(session.default_bucket())
inference_results_bucket.download_file(s3_output_key, local_inference_results_path);
data = pd.read_csv(local_inference_results_path, sep=';')
pd.set_option('display.max_rows', 10) # Keep the output on one page
data
candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName=auto_ml_job_name, SortBy='FinalObjectiveMetricValue')['Candidates']
index = 1
for candidate in candidates:
print (str(index) + " " + candidate['CandidateName'] + " " + str(candidate['FinalAutoMLJobObjectiveMetric']['Value']))
index += 1
sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['CandidateDefinitionNotebookLocation']
sm.describe_auto_ml_job(AutoMLJobName=auto_ml_job_name)['AutoMLJobArtifacts']['DataExplorationNotebookLocation']
#s3 = boto3.resource('s3')
#bucket = s3.Bucket(bucket)
#job_outputs_prefix = '{}/output/{}'.format(prefix,auto_ml_job_name)
#bucket.objects.filter(Prefix=job_outputs_prefix).delete()
| 0.206174 | 0.98679 |
## Kaggle dataset links
+ original: https://www.kaggle.com/trolukovich/apparel-images-dataset
+ me: https://www.kaggle.com/airplane2230/apparel-image-dataset-2
```
import numpy as np
import pandas as pd
import tensorflow as tf
import glob as glob
import cv2
all_data = np.array(glob.glob('./clothes_dataset/*/*.jpg', recursive=True))
# To distinguish the color and the type of clothing, set the corresponding label entries to 1.
def check_cc(color, clothes):
labels = np.zeros(11,)
# color check
if(color == 'black'):
labels[0] = 1
color_index = 0
elif(color == 'blue'):
labels[1] = 1
color_index = 1
elif(color == 'brown'):
labels[2] = 1
color_index = 2
elif(color == 'green'):
labels[3] = 1
color_index = 3
elif(color == 'red'):
labels[4] = 1
color_index = 4
elif(color == 'white'):
labels[5] = 1
color_index = 5
# clothes check
if(clothes == 'dress'):
labels[6] = 1
elif(clothes == 'shirt'):
labels[7] = 1
elif(clothes == 'pants'):
labels[8] = 1
elif(clothes == 'shorts'):
labels[9] = 1
elif(clothes == 'shoes'):
labels[10] = 1
return labels, color_index
# Declare arrays to hold the labels and the color labels.
all_labels = np.empty((all_data.shape[0], 11))
all_color_labels = np.empty((all_data.shape[0], 1))
for i, data in enumerate(all_data):
color_and_clothes = all_data[i].split('\\')[1].split('_')
color = color_and_clothes[0]
clothes = color_and_clothes[1]
labels, color_index = check_cc(color, clothes)
all_labels[i] = labels; all_color_labels[i] = color_index
all_labels = np.concatenate((all_labels, all_color_labels), axis = -1)
from sklearn.model_selection import train_test_split
# Split into training, validation, and test datasets.
train_x, test_x, train_y, test_y = train_test_split(all_data, all_labels, shuffle = True, test_size = 0.3,
random_state = 99)
train_x, val_x, train_y, val_y = train_test_split(train_x, train_y, shuffle = True, test_size = 0.3,
random_state = 99)
train_df = pd.DataFrame({'image':train_x, 'black':train_y[:, 0], 'blue':train_y[:, 1],
'brown':train_y[:, 2], 'green':train_y[:, 3], 'red':train_y[:, 4],
'white':train_y[:, 5], 'dress':train_y[:, 6], 'shirt':train_y[:, 7],
'pants':train_y[:, 8], 'shorts':train_y[:, 9], 'shoes':train_y[:, 10],
'color':train_y[:, 11]})
val_df = pd.DataFrame({'image':val_x, 'black':val_y[:, 0], 'blue':val_y[:, 1],
'brown':val_y[:, 2], 'green':val_y[:, 3], 'red':val_y[:, 4],
'white':val_y[:, 5], 'dress':val_y[:, 6], 'shirt':val_y[:, 7],
'pants':val_y[:, 8], 'shorts':val_y[:, 9], 'shoes':val_y[:, 10],
'color':val_y[:, 11]})
test_df = pd.DataFrame({'image':test_x, 'black':test_y[:, 0], 'blue':test_y[:, 1],
'brown':test_y[:, 2], 'green':test_y[:, 3], 'red':test_y[:, 4],
'white':test_y[:, 5], 'dress':test_y[:, 6], 'shirt':test_y[:, 7],
'pants':test_y[:, 8], 'shorts':test_y[:, 9], 'shoes':test_y[:, 10],
'color':test_y[:, 11]})
```
## Without color information
```
train_df.to_csv('./csv_data/nocolorinfo/train.csv')
val_df.to_csv('./csv_data/nocolorinfo/val.csv')
test_df.to_csv('./csv_data/nocolorinfo/test.csv')
```
## With color information
```
train_df.to_csv('./csv_data/colorinfo/train_color.csv')
val_df.to_csv('./csv_data/colorinfo/val_color.csv')
test_df.to_csv('./csv_data/colorinfo/test_color.csv')
```
|
github_jupyter
|
import numpy as np
import pandas as pd
import tensorflow as tf
import glob as glob
import cv2
all_data = np.array(glob.glob('./clothes_dataset/*/*.jpg', recursive=True))
# To distinguish the color and the type of clothing, set the corresponding label entries to 1.
def check_cc(color, clothes):
labels = np.zeros(11,)
# color check
if(color == 'black'):
labels[0] = 1
color_index = 0
elif(color == 'blue'):
labels[1] = 1
color_index = 1
elif(color == 'brown'):
labels[2] = 1
color_index = 2
elif(color == 'green'):
labels[3] = 1
color_index = 3
elif(color == 'red'):
labels[4] = 1
color_index = 4
elif(color == 'white'):
labels[5] = 1
color_index = 5
# clothes check
if(clothes == 'dress'):
labels[6] = 1
elif(clothes == 'shirt'):
labels[7] = 1
elif(clothes == 'pants'):
labels[8] = 1
elif(clothes == 'shorts'):
labels[9] = 1
elif(clothes == 'shoes'):
labels[10] = 1
return labels, color_index
# Declare arrays to hold the labels and the color labels.
all_labels = np.empty((all_data.shape[0], 11))
all_color_labels = np.empty((all_data.shape[0], 1))
for i, data in enumerate(all_data):
color_and_clothes = all_data[i].split('\\')[1].split('_')
color = color_and_clothes[0]
clothes = color_and_clothes[1]
labels, color_index = check_cc(color, clothes)
all_labels[i] = labels; all_color_labels[i] = color_index
all_labels = np.concatenate((all_labels, all_color_labels), axis = -1)
from sklearn.model_selection import train_test_split
# Split into training, validation, and test datasets.
train_x, test_x, train_y, test_y = train_test_split(all_data, all_labels, shuffle = True, test_size = 0.3,
random_state = 99)
train_x, val_x, train_y, val_y = train_test_split(train_x, train_y, shuffle = True, test_size = 0.3,
random_state = 99)
train_df = pd.DataFrame({'image':train_x, 'black':train_y[:, 0], 'blue':train_y[:, 1],
'brown':train_y[:, 2], 'green':train_y[:, 3], 'red':train_y[:, 4],
'white':train_y[:, 5], 'dress':train_y[:, 6], 'shirt':train_y[:, 7],
'pants':train_y[:, 8], 'shorts':train_y[:, 9], 'shoes':train_y[:, 10],
'color':train_y[:, 11]})
val_df = pd.DataFrame({'image':val_x, 'black':val_y[:, 0], 'blue':val_y[:, 1],
'brown':val_y[:, 2], 'green':val_y[:, 3], 'red':val_y[:, 4],
'white':val_y[:, 5], 'dress':val_y[:, 6], 'shirt':val_y[:, 7],
'pants':val_y[:, 8], 'shorts':val_y[:, 9], 'shoes':val_y[:, 10],
'color':val_y[:, 11]})
test_df = pd.DataFrame({'image':test_x, 'black':test_y[:, 0], 'blue':test_y[:, 1],
'brown':test_y[:, 2], 'green':test_y[:, 3], 'red':test_y[:, 4],
'white':test_y[:, 5], 'dress':test_y[:, 6], 'shirt':test_y[:, 7],
'pants':test_y[:, 8], 'shorts':test_y[:, 9], 'shoes':test_y[:, 10],
'color':test_y[:, 11]})
train_df.to_csv('./csv_data/nocolorinfo/train.csv')
val_df.to_csv('./csv_data/nocolorinfo/val.csv')
test_df.to_csv('./csv_data/nocolorinfo/test.csv')
train_df.to_csv('./csv_data/colorinfo/train_color.csv')
val_df.to_csv('./csv_data/colorinfo/val_color.csv')
test_df.to_csv('./csv_data/colorinfo/test_color.csv')
| 0.183484 | 0.79736 |
```
from PIL import Image
img = Image.open("cat.jpg")
# The image is cropped to the box specified below.
# Remember that (0, 0) is the top-left corner!
dim = (0, 0, 400, 400)
crop_img = img.crop(dim)
crop_img.show()
from PIL import Image
img = Image.open("cat.jpg")
# An image has a color space; if we drop the color values
# and rebuild the image from luminance alone, we get grayscale.
# The benefit of working with luminance only is that
# the computer no longer reacts sensitively to color.
# "L" stands for Luminance.
grayscale = img.convert("L")
grayscale.show()
from PIL import Image
img = Image.open("cat.jpg")
# When resizing an image, the target size must be
# passed as a tuple, as shown below.
resized_img = img.resize((200, 400))
resized_img.show()
from PIL import Image
from PIL import ImageEnhance
img = Image.open("cat.jpg")
# ImageEnhance can adjust the brightness
# to wash out small blemishes in the photo.
enhanced_img = ImageEnhance.Brightness(img)
# The higher the factor passed to enhance(),
# the brighter the image and the more blemishes disappear.
# Combine it with the crop() above to brighten only part of the image.
enhanced_img.enhance(3).show()
from PIL import Image
img = Image.open("cat.jpg")
# Rotates counter-clockwise.
# The argument is in degrees, not radians.
rotated_img = img.rotate(90)
rotated_img.show()
from PIL import Image
from PIL import ImageEnhance
img = Image.open("cat.jpg")
# Contrast, as the name says, strengthens the image's contrast.
contrasted_img = ImageEnhance.Contrast(img)
# The strength is controlled by the value passed to enhance(): 1, 2, 3, and so on.
# The higher the number, the stronger the contrast.
contrasted_img.enhance(3).show()
from skimage import io
img = io.imread('cat.jpg')
io.imshow(img)
from skimage import io
# Read the image (it comes back as an array/matrix)
img = io.imread('cat.jpg')
# Save it under the name new_cat.jpg
io.imsave('new_cat.jpg', img)
# Then load it again to confirm it was saved correctly.
img = io.imread('new_cat.jpg')
io.imshow(img)
from skimage import data, io
# One of the sample images often used for character recognition (OCR)
io.imshow(data.text())
io.show()
from skimage import color, io
img = io.imread('cat.jpg')
# Equivalent to the convert("L") used with Pillow above.
gray = color.rgb2gray(img)
io.imshow(gray)
io.show()
from PIL import Image
from PIL import ImageFilter
img = Image.open('cat.jpg')
# Gaussian blur is used for noise removal in automotive vision,
# and in live video it is used to blur (mosaic) specific people.
blur_img = img.filter(ImageFilter.GaussianBlur(5))
blur_img.show()
from skimage import io
from skimage import filters
img = io.imread('cat.jpg')
# The Gaussian kernel is a statistical function, so it takes a separate sigma value;
# the larger sigma is, the larger the variance,
# and the stronger the blurring (mosaic) effect.
out = filters.gaussian(img, sigma = 5)
io.imshow(out)
io.show()
from skimage import io
from skimage.morphology import disk
from skimage import color
from skimage import filters
img = io.imread('cat.jpg')
img = color.rgb2gray(img)
out = filters.median(img, disk(15))
io.imshow(out)
io.show()
from PIL import Image
from PIL import ImageFilter
img = Image.open('cat.jpg')
# Convert to grayscale to minimize image noise
img = img.convert("L")
# Build a custom convolution kernel for a dedicated filter
new_img = img.filter(
ImageFilter.Kernel(
        # A 3x3 convolution kernel whose values are
        #   [1, 2, 3]
        #   [4, 5, 6]
        #   [7, 8, 9]
        # This matrix is convolved with the image matrix,
        # which in effect performs a differentiation.
        # The first argument is the kernel size,
        # the second argument is the values placed in that matrix.
(3, 3), [1, 2, 3, 4, 5, 6, 7, 8, 9]
)
)
new_img.show()
from PIL import Image
from PIL import ImageFilter
img = Image.open('cat.jpg')
img = img.convert("L")
new_img = img.filter(
ImageFilter.Kernel(
        # A convolution kernel that is a slightly modified Sobel filter.
        # Before studying the Sobel filter, it helps to study
        # engineering mathematics and partial derivatives (vector calculus).
(3, 3), [1, 0, -1, 5, 0, -5, 1, 0, 1]
)
)
# A quick digression on the theory behind these filters:
# Cheolsu is at point A and wants to go to point B.
# The distance from A to B is 10 m,
# and the round trip from A to B and back took 100 minutes.
# What is Cheolsu's speed?
# 20 m in 100 minutes, s = vt -> 20 / 100 m per minute.
# Is that a pure (instantaneous) speed or an average speed? An average.
# Since a computer cannot express limit x -> 0,
# differentiation is likewise approached through averages,
# which reduces it to a simple slope-of-a-triangle problem.
new_img.show()
import matplotlib.pyplot as plt
from skimage import data
from skimage.filters import threshold_otsu, threshold_local
from skimage.io import imread
from skimage.color import rgb2gray
img = imread('highway.png')
img = rgb2gray(img)
# threshold_otsu computes a threshold value
thresh_value = threshold_otsu(img)
# Pixels are mapped to white or black depending on which side of the threshold they fall
thresh_img = img > thresh_value
print("img =", img)
print("thresh_img =", thresh_img)
# Local (adaptive) thresholding scans the image region by region,
# repeating the thresholding with a block size of 35 and an offset of 10.
block_size = 35
adaptive_img = threshold_local(thresh_img, block_size, offset = 10)
# nrows = 3: the figures are laid out in three rows
fig, axes = plt.subplots(nrows = 3, figsize = (20, 10))
# One axis per figure: the first, second, and third plot
ax0, ax1, ax2 = axes
# Render each figure in grayscale
plt.gray()
ax0.imshow(img)
ax0.set_title('Origin')
ax1.imshow(thresh_img)
ax1.set_title('Global Thresholding')
ax2.imshow(adaptive_img)
ax2.set_title('Adaptive Thresholding')
from skimage import io
from skimage import feature
from skimage import color
img = io.imread('highway.jpg')
img = color.rgb2gray(img)
# Use the Canny edge algorithm,
# which detects edges in the image.
edge = feature.canny(img, 3)
io.imshow(edge)
io.show()
import matplotlib.pyplot as plt
from skimage.transform import(hough_line, probabilistic_hough_line)
from skimage.feature import canny
from skimage import io, color
img = io.imread('highway.jpg')
img = color.rgb2gray(img)
edges = canny(img, 3)
io.imshow(edges)
io.show()
# The Hough line transform uses trigonometric functions internally,
# combined with probabilistic/statistical reasoning.
# The threshold argument plays the same role as in threshold_otsu / threshold_local:
# it decides which values get discarded.
# Pixel intensities run from 0 to 255,
# because color is stored in 8 bits: 2^8 = 256 values (0 - 255).
lines = probabilistic_hough_line(
edges, threshold = 10, line_length = 5, line_gap = 3
)
fig, axes = plt.subplots(
1, 3, figsize = (15, 5), sharex = True, sharey = True
)
ax = axes.ravel()
ax[0].imshow(img, cmap = plt.cm.gray)
ax[0].set_title('Origin')
ax[1].imshow(edges, cmap = plt.cm.gray)
ax[1].set_title('Canny Edge')
ax[2].imshow(edges * 0)
for line in lines:
p0, p1 = line
ax[2].plot(
(p0[0], p1[0]), (p0[1], p1[1])
)
ax[2].set_xlim(0, img.shape[1])
ax[2].set_ylim(img.shape[0], 0)
ax[2].set_title('Probabilistic Hough')
for a in ax:
a.set_axis_off()
plt.tight_layout()
plt.show()
from sklearn import datasets, metrics
from sklearn.linear_model import LogisticRegression
mnist = datasets.load_digits()
imgs = mnist.images
data_size = len(imgs)
io.imshow(imgs[3])
io.show()
# Image preprocessing
imgs = imgs.reshape(len(imgs), -1)
labels = mnist.target
# Set up logistic regression
LR_classifier = LogisticRegression(
C = 0.01, penalty = 'l2', tol = 0.01
)
# Use 3/4 of the data for training and 1/4 for evaluation
LR_classifier.fit(
imgs[:int((data_size / 4) * 3)],
labels[:int((data_size / 4) * 3)]
)
# Run the evaluation on the remaining quarter of the data
predictions = LR_classifier.predict(imgs[int((data_size / 4) * 3):])
target = labels[int((data_size / 4) * 3):]
# Measure performance
print("Performance Report: \n%s\n" %
(metrics.classification_report(target, predictions))
)
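# The slicing above splits by position; scikit-learn's train_test_split does
# the same job with shuffling. An alternative sketch (my own addition, not
# what the original notebook used):
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
    imgs, labels, test_size=0.25, random_state=0
)
print(len(X_train), len(X_test))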
from sklearn import datasets, metrics
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from skimage import io, color, feature, transform
mnist = datasets.load_digits()
imgs = mnist.images
data_size = len(imgs)
# Image preprocessing
imgs = imgs.reshape(len(imgs), -1)
labels = mnist.target
# Set up logistic regression
LR_classifier = LogisticRegression(
C = 0.01, penalty = 'l2', tol = 0.01, max_iter = 1000000000
)
# Use 3/4 of the data for training and 1/4 for evaluation
LR_classifier.fit(
imgs[:int((data_size / 4) * 3)],
labels[:int((data_size / 4) * 3)]
)
# Feed in a user-supplied image and check
# whether the model actually recognizes the digit in it.
digit_img = io.imread('digit.jpg')
digit_img = color.rgb2gray(digit_img)
# Note: sklearn's digits images are 8 x 8 (unlike full MNIST's 28 x 28), so the input must be resized to 8 x 8
digit_img = transform.resize(digit_img, (8, 8), mode="wrap")
digit_edge = feature.canny(digit_img, sigma = 1)
io.imshow(digit_edge)
# As in a typical deep-learning pipeline,
# the 2-D image must be flattened into a 1-D feature vector at the end.
# (Data structures boil down to graph theory.)
digit_edge = [digit_edge.flatten()]
# Run the prediction
predictions = LR_classifier.predict(digit_edge)
print(predictions)
import cv2
img = cv2.imread('cat.jpg')
cv2.imshow("image", img)
cv2.waitKey()
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('cat.jpg')
# OpenCV and Matplotlib use different color-channel orders
# (BGR vs RGB) for their color spaces,
# so to display this image correctly
# the color space has to be converted to match.
plt.imshow(img)
plt.show()
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('cat.jpg')
# cv2.cvtColor is short for 'convert color'
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.show()
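# BGR -> RGB is just a reversal of the channel axis, so cvtColor and a plain
# NumPy slice give the same result. A small illustrative check (not part of
# the original notebook):
import cv2
import numpy as np
sample = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(np.array_equal(cv2.cvtColor(sample, cv2.COLOR_BGR2RGB), sample[:, :, ::-1]))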
import cv2, numpy as np
cv2.namedWindow('Test')
fill_val = np.array([255, 255, 255], np.uint8)
def trackbar_callback(idx, val):
fill_val[idx] = val
cv2.createTrackbar(
'R', 'Test', 255, 255, lambda v: trackbar_callback(2, v)
)
cv2.createTrackbar(
'G', 'Test', 255, 255, lambda v: trackbar_callback(1, v)
)
cv2.createTrackbar(
'B', 'Test', 255, 255, lambda v: trackbar_callback(0, v)
)
while True:
img = np.full((500, 500, 3), fill_val)
cv2.imshow('Test', img)
key = cv2.waitKey(3)
# ESC
if key == 27:
break
cv2.destroyAllWindows()
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('cat.jpg')
cv2.imwrite('test_cat.jpg', img)
test_img = cv2.imread('test_cat.jpg')
plt.imshow(cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB))
plt.show()
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('cat.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(cv2.cvtColor(gray_img, cv2.COLOR_BGR2RGB))
plt.show()
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('highway.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, new_img = cv2.threshold(gray, 180, 245, cv2.THRESH_BINARY)
print(ret)
plt.imshow(cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB))
plt.show()
import cv2
cam = cv2.VideoCapture(0)
while(cam.isOpened()):
ret, frame = cam.read()
cv2.imshow('frame', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
import cv2
fps = 30
title = 'normal speed video'
delay = int(1000 / fps)
cam = cv2.VideoCapture("challenge.mp4")
while(cam.isOpened()):
ret, frame = cam.read()
if ret != True:
break
cv2.imshow('frame', frame)
if cv2.waitKey(delay) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
fps = 30
title = 'normal speed video'
delay = int(1000 / fps)
cam = cv2.VideoCapture("challenge.mp4")
while(cam.isOpened()):
ret, frame = cam.read()
if ret != True:
break
# Additional per-frame processing functions go here.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY);
cv2.imshow('frame', gray)
if cv2.waitKey(delay) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
fps = 30
title = 'normal speed video'
delay = int(1000 / fps)
cam = cv2.VideoCapture("challenge.mp4")
while(cam.isOpened()):
ret, frame = cam.read()
if ret != True:
break
# Additional per-frame processing functions go here.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY);
# Adding Canny on top already makes the pipeline feel
# slightly delayed from a real-time processing point of view.
# Image processing really is that heavy a workload,
# so these steps should always be run
# on multiple processes or threads.
edges = cv2.Canny(gray, 235, 243, 3)
cv2.imshow('frame', edges)
if cv2.waitKey(delay) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
import cv2
import numpy as np
# Region of Interest (ROI)
# The first argument is the video frame,
# the second is the vertices (coordinates) of the region of interest.
def roi(img, vertices):
mask = np.zeros_like(img)
if len(img.shape) > 2:
channel_count = img.shape[2]
ignore_mask_color = (255, ) * channel_count
else:
ignore_mask_color = 255
#print(ignore_mask_color)
#print(mask)
# mask is currently all zeros,
# ignore_mask_color is 255.
# vertices describes our region of interest:
# the area inside the vertices keeps its original values,
# everything outside the vertices is removed.
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_img = cv2.bitwise_and(img, mask)
# Finally we obtain the frame restricted to the ROI.
return masked_img
fps = 30
title = 'normal speed video'
delay = int(1000 / fps)
cam = cv2.VideoCapture("challenge.mp4")
while(cam.isOpened()):
ret, frame = cam.read()
if ret != True:
break
# From the captured frame we can read
# its height and width.
height = frame.shape[0]
width = frame.shape[1]
# Define the region we care about (a triangle)
region_of_interest_vertices = [
(0, height),
(width / 2, height / 2),
(width, height)
]
# Additional per-frame processing functions go here.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY);
# Adding Canny on top already makes the pipeline feel
# slightly delayed from a real-time processing point of view.
# Image processing really is that heavy a workload,
# so these steps should always be run
# on multiple processes or threads.
edges = cv2.Canny(gray, 235, 243, 3)
# Mask out everything outside the region of interest.
cropped_img = roi(
edges,
np.array(
[region_of_interest_vertices], np.int32
)
)
cv2.imshow('frame', cropped_img)
if cv2.waitKey(delay) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
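# A tiny illustrative check of the ROI idea above (not part of the original
# notebook): fill a triangle into a mask, AND it with an all-255 image, and
# only the pixels inside the triangle survive.
import cv2
import numpy as np
tiny = np.full((10, 10), 255, dtype=np.uint8)
triangle = np.array([[(0, 9), (5, 0), (9, 9)]], dtype=np.int32)
tiny_mask = np.zeros_like(tiny)
cv2.fillPoly(tiny_mask, triangle, 255)
print(np.count_nonzero(cv2.bitwise_and(tiny, tiny_mask)))  # only the triangle area is non-zero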
import cv2
import math
import numpy as np
# Region of Interest (ROI)
# The first argument is the video frame,
# the second is the vertices (coordinates) of the region of interest.
def roi(img, vertices):
mask = np.zeros_like(img)
if len(img.shape) > 2:
channel_count = img.shape[2]
ignore_mask_color = (255, ) * channel_count
else:
ignore_mask_color = 255
#print(ignore_mask_color)
#print(mask)
# mask is currently all zeros,
# ignore_mask_color is 255.
# vertices describes our region of interest:
# the area inside the vertices keeps its original values,
# everything outside the vertices is removed.
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_img = cv2.bitwise_and(img, mask)
# Finally we obtain the frame restricted to the ROI.
return masked_img
def draw_lines(img, lines, color=[0, 255, 0], thickness=3):
line_img = np.zeros(
(
img.shape[0],
img.shape[1],
3
),
dtype=np.uint8
)
img = np.copy(img)
if lines is None:
return
for line in lines:
for x1, y1, x2, y2 in line:
cv2.line(
line_img, (x1, y1), (x2, y2), color, thickness
)
img = cv2.addWeighted(img, 0.8, line_img, 1.0, 0.0)
return img
fps = 30
title = 'normal speed video'
delay = int(1000 / fps)
cam = cv2.VideoCapture("challenge.mp4")
while(cam.isOpened()):
ret, frame = cam.read()
if ret != True:
break
# From the captured frame we can read
# its height and width.
height = frame.shape[0]
width = frame.shape[1]
# Define the region we care about (a triangle)
region_of_interest_vertices = [
(0, height),
(width / 2, height / 2),
(width, height)
]
# Additional per-frame processing functions go here.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY);
# Adding Canny on top already makes the pipeline feel
# slightly delayed from a real-time processing point of view.
# Image processing really is that heavy a workload,
# so these steps should always be run
# on multiple processes or threads.
edges = cv2.Canny(gray, 235, 243, 3)
# Mask out everything outside the region of interest.
cropped_img = roi(
edges,
np.array(
[region_of_interest_vertices], np.int32
)
)
# Draw the guide lines that will assist driving.
lines = cv2.HoughLinesP(
cropped_img,
rho = 6,
theta = np.pi / 180,
threshold = 160,
lines = np.array([]),
minLineLength = 40,
maxLineGap = 25
)
left_line_x = []
left_line_y = []
right_line_x = []
right_line_y = []
# Compute the slope here.
# The slope is a tangent, i.e. delta y / delta x,
# and since we know two points, the slope between them
# can be computed as below.
for line in lines:
for x1, y1, x2, y2 in line:
slope = (y2 - y1) / (x2 - x1)
if math.fabs(slope) < 0.5:
continue
if slope <= 0:
left_line_x.extend([x1, x2])
left_line_y.extend([y1, y2])
else:
right_line_x.extend([x1, x2])
right_line_y.extend([y1, y2])
min_y = int(frame.shape[0] * (3 / 5))
max_y = int(frame.shape[0])
# Fit a straight (first-degree) line via np.polyfit / np.poly1d
poly_left = np.poly1d(np.polyfit(
left_line_y,
left_line_x,
deg = 1
))
left_x_start = int(poly_left(max_y))
left_x_end = int(poly_left(min_y))
poly_right = np.poly1d(np.polyfit(
right_line_y,
right_line_x,
deg = 1
))
right_x_start = int(poly_right(max_y))
right_x_end = int(poly_right(min_y))
# Draw the lines to overlay on the actual frame.
line_img = draw_lines(
frame,
[[
[left_x_start, max_y, left_x_end, min_y],
[right_x_start, max_y, right_x_end, min_y],
]],
thickness = 5
)
cv2.imshow('frame', line_img)
if cv2.waitKey(delay) & 0xFF == ord('q'):
break
cam.release()
cv2.destroyAllWindows()
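# The slope and line-fitting logic above in miniature (illustrative only,
# with made-up points; not part of the original notebook): np.polyfit on
# (y, x) pairs gives coefficients that np.poly1d turns into x = f(y).
import numpy as np
pts_y = [10.0, 20.0, 30.0]
pts_x = [15.0, 25.0, 35.0]
lane_fit = np.poly1d(np.polyfit(pts_y, pts_x, deg=1))
print(lane_fit(40.0))  # extrapolated x position at y = 40 (45.0 for this toy data)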
import os
print(os.sys.path)
```
```
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
accuracy = [
['MobileNetV2', 1, 0.9890, 0.3676],
['MobileNetV2', 2, 0.9916, 0.5084],
['MobileNetV2', 3, 0.9926, 0.4996],
['MobileNetV2', 4, 0.9939, 0.7712],
['MobileNetV2', 5, 0.9945, 0.6520],
['MobileNetV2', 6, 0.9953, 0.9567],
['MobileNetV2', 7, 0.9961, 0.7668],
['MobileNetV2', 8, 0.9960, 0.5078],
['MobileNetV2', 9, 0.9960, 0.9556],
['MobileNetV2', 10, 0.9972, 0.9949],
['MobileNetV2', 11, 0.9967, 0.9935],
['MobileNetV2', 12, 0.9970, 0.9733],
['MobileNetV2', 13, 0.9970, 0.9776],
['MobileNetV2', 14, 0.9975, 0.9413],
['MobileNetV2', 15, 0.9974, 0.9810],
['MobileNetV2', 16, 0.9988, 0.9984],
['MobileNetV2', 17, 0.9994, 0.9987],
['MobileNetV2', 18, 0.9993, 0.9987],
['MobileNetV2', 19, 0.9993, 0.9982],
['MobileNetV2', 20, 0.9995, 0.9986],
['MobileNetV2', 21, 0.9996, 0.9987],
['MobileNetV2', 22, 0.9995, 0.9984],
['MobileNetV2', 23, 0.9995, 0.9987],
['MobileNetV2', 24, 0.9996, 0.9989],
['MobileNetV2', 25, 0.9996, 0.9996],
['MobileNetV2', 26, 0.9996, 0.9992],
['MobileNetV2', 27, 0.9996, 0.9988],
['MobileNetV2', 28, 0.9996, 0.9991],
['MobileNetV2', 29, 0.9996, 0.9973],
['MobileNetV2', 30, 0.9996, 0.9987],
### InceptionV3
['InceptionV3', 1, 0.9249, 0.9731],
['InceptionV3', 2, 0.9634, 0.9726],
['InceptionV3', 3, 0.9701, 0.9539],
['InceptionV3', 4, 0.9774, 0.8868],
['InceptionV3', 5, 0.9807, 0.9831],
['InceptionV3', 6, 0.9867, 0.9649],
['InceptionV3', 7, 0.9894, 0.9736],
['InceptionV3', 8, 0.9908, 0.9842],
['InceptionV3', 9, 0.9911, 0.9581],
['InceptionV3', 10, 0.9920, 0.8751],
['InceptionV3', 11, 0.9871, 0.9820],
['InceptionV3', 12, 0.9928, 0.9874],
['InceptionV3', 13, 0.9934, 0.4180],
['InceptionV3', 14, 0.9905, 0.9881],
['InceptionV3', 15, 0.9941, 0.8944],
['InceptionV3', 16, 0.9946, 0.9901],
['InceptionV3', 17, 0.9938, 0.9943],
['InceptionV3', 18, 0.9957, 0.9917],
['InceptionV3', 19, 0.9953, 0.9481],
['InceptionV3', 20, 0.9958, 0.9858],
['InceptionV3', 21, 0.9955, 0.9927],
['InceptionV3', 22, 0.9955, 0.9938],
['InceptionV3', 23, 0.9973, 0.9979],
['InceptionV3', 24, 0.9978, 0.9982],
['InceptionV3', 25, 0.9984, 0.9983],
['InceptionV3', 26, 0.9984, 0.9982],
['InceptionV3', 27, 0.9989, 0.9979],
['InceptionV3', 28, 0.9986, 0.9982],
['InceptionV3', 29, 0.9990, 0.9983],
['InceptionV3', 30, 0.9991, 0.9982],
### ResNet50
['ResNet50', 1, 0.9678, 0.6914],
['ResNet50', 2, 0.9780, 0.8896],
['ResNet50', 3, 0.9799, 0.9220],
['ResNet50', 4, 0.9840, 0.7072],
['ResNet50', 5, 0.9863, 0.9812],
['ResNet50', 6, 0.9850, 0.9225],
['ResNet50', 7, 0.9876, 0.7351],
['ResNet50', 8, 0.9897, 0.7879],
['ResNet50', 9, 0.9902, 0.5196],
['ResNet50', 10, 0.9898, 0.9628],
['ResNet50', 11, 0.9951, 0.9931],
['ResNet50', 12, 0.9962, 0.9959],
['ResNet50', 13, 0.9966, 0.9969],
['ResNet50', 14, 0.9972, 0.9964],
['ResNet50', 15, 0.9973, 0.9969],
['ResNet50', 16, 0.9976, 0.9942],
['ResNet50', 17, 0.9981, 0.9966],
['ResNet50', 18, 0.9978, 0.9976],
['ResNet50', 19, 0.9980, 0.9977],
['ResNet50', 20, 0.9982, 0.9971],
['ResNet50', 21, 0.9978, 0.9943],
['ResNet50', 22, 0.9982, 0.9962],
['ResNet50', 23, 0.9983, 0.9961],
['ResNet50', 24, 0.9984, 0.9965],
['ResNet50', 25, 0.9985, 0.9978],
['ResNet50', 26, 0.9988, 0.9983],
['ResNet50', 27, 0.9988, 0.9984],
['ResNet50', 28, 0.9988, 0.9977],
['ResNet50', 29, 0.9989, 0.9981],
['ResNet50', 30, 0.9987, 0.9982],
]
loss = [
['MobileNetV2', 1, 0.0328, 16.2260],
['MobileNetV2', 2, 0.0265, 9.2775],
['MobileNetV2', 3, 0.0232, 10.2493], ## 3
['MobileNetV2', 4, 0.0180, 2.2250], ## 4
['MobileNetV2', 5, 0.0163, 3.3614], ## 5
['MobileNetV2', 6, 0.0149, 0.1620], ## 6
['MobileNetV2', 7, 0.0128, 1.6062], ## 7
['MobileNetV2', 8, 0.0122, 2.7681], ## 8
['MobileNetV2', 9, 0.0127, 0.1952], ## 9
['MobileNetV2', 10, 0.0093, 0.0191], ## 10
['MobileNetV2', 11, 0.0106, 0.0193], ## 1
['MobileNetV2', 12, 0.0096, 0.0728], ## 2
['MobileNetV2', 13, 0.0092, 0.0683], ## 3
['MobileNetV2', 14, 0.0075, 0.1830], ## 4
['MobileNetV2', 15, 0.0074, 0.0676], ## 5
['MobileNetV2', 16, 0.0031, 0.0051], ## 6
['MobileNetV2', 17, 0.0020, 0.0033], ## 7
['MobileNetV2', 18, 0.0019, 0.0037], ## 8
['MobileNetV2', 19, 0.0018, 0.0054], ## 9
['MobileNetV2', 20, 0.0016, 0.0045], ## 10
['MobileNetV2', 21, 0.0014, 0.0043], ## 1
['MobileNetV2', 22, 0.0014, 0.0049], ## 2
['MobileNetV2', 23, 0.0012, 0.0050], ## 3
['MobileNetV2', 24, 0.0013, 0.0058], ## 4
['MobileNetV2', 25, 9.1785e-04, 0.0012],
['MobileNetV2', 26, 0.0014, 0.0023],
['MobileNetV2', 27, 0.0011, 0.0047],
['MobileNetV2', 28, 0.0010, 0.0032],
['MobileNetV2', 29, 0.0011, 0.0106],
['MobileNetV2', 30, 0.0012, 0.0038],
### InceptionV3
['InceptionV3', 1, 0.2072, 0.1038],
['InceptionV3', 2, 0.1054, 0.1128],
['InceptionV3', 3, 0.0886, 0.1255],
['InceptionV3', 4, 0.0652, 0.2765],
['InceptionV3', 5, 0.0586, 0.0467],
['InceptionV3', 6, 0.0380, 0.0980],
['InceptionV3', 7, 0.0330, 0.0759],
['InceptionV3', 8, 0.0274, 0.0499],
['InceptionV3', 9, 0.0270, 0.1046],
['InceptionV3', 10, 0.0250, 0.3535],
['InceptionV3', 11, 0.0350, 0.0524],
['InceptionV3', 12, 0.0186, 0.0330],
['InceptionV3', 13, 0.0186, 2.3618],
['InceptionV3', 14, 0.0271, 0.0413],
['InceptionV3', 15, 0.0175, 0.3454],
['InceptionV3', 16, 0.0161, 0.0321],
['InceptionV3', 17, 0.0177, 0.0193],
['InceptionV3', 18, 0.0127, 0.0260],
['InceptionV3', 19, 0.0138, 0.1391],
['InceptionV3', 20, 0.0125, 0.0433],
['InceptionV3', 21, 0.0128, 0.0171],
['InceptionV3', 22, 0.0139, 0.0184],
['InceptionV3', 23, 0.0083, 0.0065],
['InceptionV3', 24, 0.0057, 0.0065],
['InceptionV3', 25, 0.0045, 0.0057],
['InceptionV3', 26, 0.0044, 0.0060],
['InceptionV3', 27, 0.0029, 0.0073],
['InceptionV3', 28, 0.0035, 0.0054],
['InceptionV3', 29, 0.0027, 0.0052],
['InceptionV3', 30, 0.0026, 0.0064],
### ResNet50
['ResNet50', 1, 0.0920, 1.3306], ## 1
['ResNet50', 2, 0.0622, 0.2611], ## 2
['ResNet50', 3, 0.0570, 0.1880], ## 3
['ResNet50', 4, 0.0460, 0.5956], ## 4
['ResNet50', 5, 0.0405, 0.0632], ## 5
['ResNet50', 6, 0.0431, 0.2366], ## 6
['ResNet50', 7, 0.0360, 1.1562], ## 7
['ResNet50', 8, 0.0298, 0.6376], ## 8
['ResNet50', 9, 0.0297, 1.3452], ## 9
['ResNet50', 10, 0.0295, 0.1477], ## 10
['ResNet50', 11, 0.0144, 0.0219], ## 1
['ResNet50', 12, 0.0105, 0.0110], ## 2
['ResNet50', 13, 0.0085, 0.0098], ## 3
['ResNet50', 14, 0.0079, 0.0105], ## 4
['ResNet50', 15, 0.0076, 0.0088], ## 5
['ResNet50', 16, 0.0070, 0.0190], ## 6
['ResNet50', 17, 0.0057, 0.0096], ## 7
['ResNet50', 18, 0.0062, 0.0068], ## 8
['ResNet50', 19, 0.0058, 0.0073], ## 9
['ResNet50', 20, 0.0053, 0.0101], ## 10
['ResNet50', 21, 0.0059, 0.0160], ## 1
['ResNet50', 22, 0.0050, 0.0153], ## 2
['ResNet50', 23, 0.0048, 0.0117], ## 3
['ResNet50', 24, 0.0047, 0.0091], ## 4
['ResNet50', 25, 0.0043, 0.0066], ## 5
['ResNet50', 26, 0.0034, 0.0058], ## 6
['ResNet50', 27, 0.0033, 0.0058], ## 7
['ResNet50', 28, 0.0033, 0.0071], ## 8
['ResNet50', 29, 0.0030, 0.0070], ## 9
['ResNet50', 30, 0.0039, 0.0063], ## 10
]
accuracy_df = pd.DataFrame(accuracy, columns=['Model', 'Epoch', 'Training Accuracy', 'Validation Accuracy'])
loss_df = pd.DataFrame(loss, columns=['Model', 'Epoch','Training Loss', 'Validation Loss'])
accuracy_df
loss_df
import seaborn as sns
plt.figure(figsize=(16, 8))
plt.suptitle('Comparison of Training and Validation Accuracy Across the 3 Model Architectures', fontsize=16)
plt.subplot(1, 2, 1)
plt.title('Training Accuracy')
sns.lineplot(data=accuracy_df, x="Epoch", y="Training Accuracy", hue="Model")
plt.subplot(1, 2, 2)
sns.lineplot(data=accuracy_df, x="Epoch", y="Validation Accuracy", hue="Model")
plt.title('Validation Accuracy')
import seaborn as sns
plt.figure(figsize=(16, 8))
plt.suptitle('Comparison of Training and Validation Loss Across the 3 Model Architectures', fontsize=16)
plt.subplot(1, 2, 1)
sns.lineplot(data=loss_df, x="Epoch", y="Training Loss", hue="Model")
plt.title('Training Loss')
plt.subplot(1, 2, 2)
sns.lineplot(data=loss_df, x="Epoch", y="Validation Loss", hue="Model")
plt.title('Validation Loss')
plt.show()
```
## New Approach
```
new_accuracy = [
['MobileNetV2', 1, 0.9890, "Training"],
['MobileNetV2', 2, 0.9916, "Training"],
['MobileNetV2', 3, 0.9926, "Training"],
['MobileNetV2', 4, 0.9939, "Training"],
['MobileNetV2', 5, 0.9945, "Training"],
['MobileNetV2', 6, 0.9953, "Training"],
['MobileNetV2', 7, 0.9961, "Training"],
['MobileNetV2', 8, 0.9960, "Training"],
['MobileNetV2', 9, 0.9960, "Training"],
['MobileNetV2', 10, 0.9972,"Training"],
['MobileNetV2', 11, 0.9967,"Training"],
['MobileNetV2', 12, 0.9970,"Training"],
['MobileNetV2', 13, 0.9970,"Training"],
['MobileNetV2', 14, 0.9975,"Training"],
['MobileNetV2', 15, 0.9974,"Training"],
['MobileNetV2', 16, 0.9988,"Training"],
['MobileNetV2', 17, 0.9994,"Training"],
['MobileNetV2', 18, 0.9993,"Training"],
['MobileNetV2', 19, 0.9993,"Training"],
['MobileNetV2', 20, 0.9995,"Training"],
['MobileNetV2', 21, 0.9996,"Training"],
['MobileNetV2', 22, 0.9995,"Training"],
['MobileNetV2', 23, 0.9995,"Training"],
['MobileNetV2', 24, 0.9996,"Training"],
['MobileNetV2', 25, 0.9996,"Training"],
['MobileNetV2', 26, 0.9996,"Training"],
['MobileNetV2', 27, 0.9996,"Training"],
['MobileNetV2', 28, 0.9996,"Training"],
['MobileNetV2', 29, 0.9996,"Training"],
['MobileNetV2', 30, 0.9996,"Training"],
['MobileNetV2', 1, 0.3676, "Validation"],
['MobileNetV2', 2, 0.5084, "Validation"],
['MobileNetV2', 3, 0.4996, "Validation"],
['MobileNetV2', 4, 0.7712, "Validation"],
['MobileNetV2', 5, 0.6520, "Validation"],
['MobileNetV2', 6, 0.9567, "Validation"],
['MobileNetV2', 7, 0.7668, "Validation"],
['MobileNetV2', 8, 0.5078, "Validation"],
['MobileNetV2', 9, 0.9556, "Validation"],
['MobileNetV2', 10, 0.9949, "Validation"],
['MobileNetV2', 11, 0.9935, "Validation"],
['MobileNetV2', 12, 0.9733, "Validation"],
['MobileNetV2', 13, 0.9776, "Validation"],
['MobileNetV2', 14, 0.9413, "Validation"],
['MobileNetV2', 15, 0.9810, "Validation"],
['MobileNetV2', 16, 0.9984, "Validation"],
['MobileNetV2', 17, 0.9987, "Validation"],
['MobileNetV2', 18, 0.9987, "Validation"],
['MobileNetV2', 19, 0.9982, "Validation"],
['MobileNetV2', 20, 0.9986, "Validation"],
['MobileNetV2', 21, 0.9987, "Validation"],
['MobileNetV2', 22, 0.9984, "Validation"],
['MobileNetV2', 23, 0.9987, "Validation"],
['MobileNetV2', 24, 0.9989, "Validation"],
['MobileNetV2', 25, 0.9996, "Validation"],
['MobileNetV2', 26, 0.9992, "Validation"],
['MobileNetV2', 27, 0.9988, "Validation"],
['MobileNetV2', 28, 0.9991, "Validation"],
['MobileNetV2', 29, 0.9973, "Validation"],
['MobileNetV2', 30, 0.9987, "Validation"],
### InceptionV3
['InceptionV3', 1, 0.9249, 'Training'],
['InceptionV3', 2, 0.9634, 'Training'],
['InceptionV3', 3, 0.9701, 'Training'],
['InceptionV3', 4, 0.9774, 'Training'],
['InceptionV3', 5, 0.9807, 'Training'],
['InceptionV3', 6, 0.9867, 'Training'],
['InceptionV3', 7, 0.9894, 'Training'],
['InceptionV3', 8, 0.9908, 'Training'],
['InceptionV3', 9, 0.9911, 'Training'],
['InceptionV3', 10, 0.9920,'Training'],
['InceptionV3', 11, 0.9871,'Training'],
['InceptionV3', 12, 0.9928,'Training'],
['InceptionV3', 13, 0.9934,'Training'],
['InceptionV3', 14, 0.9905,'Training'],
['InceptionV3', 15, 0.9941,'Training'],
['InceptionV3', 16, 0.9946,'Training'],
['InceptionV3', 17, 0.9938,'Training'],
['InceptionV3', 18, 0.9957,'Training'],
['InceptionV3', 19, 0.9953,'Training'],
['InceptionV3', 20, 0.9958,'Training'],
['InceptionV3', 21, 0.9955,'Training'],
['InceptionV3', 22, 0.9955,'Training'],
['InceptionV3', 23, 0.9973,'Training'],
['InceptionV3', 24, 0.9978,'Training'],
['InceptionV3', 25, 0.9984,'Training'],
['InceptionV3', 26, 0.9984,'Training'],
['InceptionV3', 27, 0.9989,'Training'],
['InceptionV3', 28, 0.9986,'Training'],
['InceptionV3', 29, 0.9990,'Training'],
['InceptionV3', 30, 0.9991,'Training'],
['InceptionV3', 1, 0.9731, "Validation"],
['InceptionV3', 2, 0.9726, "Validation"],
['InceptionV3', 3, 0.9539, "Validation"],
['InceptionV3', 4, 0.8868, "Validation"],
['InceptionV3', 5, 0.9831, "Validation"],
['InceptionV3', 6, 0.9649, "Validation"],
['InceptionV3', 7, 0.9736, "Validation"],
['InceptionV3', 8, 0.9842, "Validation"],
['InceptionV3', 9, 0.9581, "Validation"],
['InceptionV3', 10, 0.8751, "Validation"],
['InceptionV3', 11, 0.9820, "Validation"],
['InceptionV3', 12, 0.9874, "Validation"],
['InceptionV3', 13, 0.4180, "Validation"],
['InceptionV3', 14, 0.9881, "Validation"],
['InceptionV3', 15, 0.8944, "Validation"],
['InceptionV3', 16, 0.9901, "Validation"],
['InceptionV3', 17, 0.9943, "Validation"],
['InceptionV3', 18, 0.9917, "Validation"],
['InceptionV3', 19, 0.9481, "Validation"],
['InceptionV3', 20, 0.9858, "Validation"],
['InceptionV3', 21, 0.9927, "Validation"],
['InceptionV3', 22, 0.9938, "Validation"],
['InceptionV3', 23, 0.9979, "Validation"],
['InceptionV3', 24, 0.9982, "Validation"],
['InceptionV3', 25, 0.9983, "Validation"],
['InceptionV3', 26, 0.9982, "Validation"],
['InceptionV3', 27, 0.9979, "Validation"],
['InceptionV3', 28, 0.9982, "Validation"],
['InceptionV3', 29, 0.9983, "Validation"],
['InceptionV3', 30, 0.9982, "Validation"],
### ResNet50
['ResNet50', 1, 0.9678, 'Training'],
['ResNet50', 2, 0.9780, 'Training'],
['ResNet50', 3, 0.9799, 'Training'],
['ResNet50', 4, 0.9840, 'Training'],
['ResNet50', 5, 0.9863, 'Training'],
['ResNet50', 6, 0.9850, 'Training'],
['ResNet50', 7, 0.9876, 'Training'],
['ResNet50', 8, 0.9897, 'Training'],
['ResNet50', 9, 0.9902, 'Training'],
['ResNet50', 10, 0.9898,'Training'],
['ResNet50', 11, 0.9951,'Training'],
['ResNet50', 12, 0.9962,'Training'],
['ResNet50', 13, 0.9966,'Training'],
['ResNet50', 14, 0.9972,'Training'],
['ResNet50', 15, 0.9973,'Training'],
['ResNet50', 16, 0.9976,'Training'],
['ResNet50', 17, 0.9981,'Training'],
['ResNet50', 18, 0.9978,'Training'],
['ResNet50', 19, 0.9980,'Training'],
['ResNet50', 20, 0.9982,'Training'],
['ResNet50', 21, 0.9978,'Training'],
['ResNet50', 22, 0.9982,'Training'],
['ResNet50', 23, 0.9983,'Training'],
['ResNet50', 24, 0.9984,'Training'],
['ResNet50', 25, 0.9985,'Training'],
['ResNet50', 26, 0.9988,'Training'],
['ResNet50', 27, 0.9988,'Training'],
['ResNet50', 28, 0.9988,'Training'],
['ResNet50', 29, 0.9989,'Training'],
['ResNet50', 30, 0.9987,'Training'],
['ResNet50', 1, 0.6914, "Validation"],
['ResNet50', 2, 0.8896, "Validation"],
['ResNet50', 3, 0.9220, "Validation"],
['ResNet50', 4, 0.7072, "Validation"],
['ResNet50', 5, 0.9812, "Validation"],
['ResNet50', 6, 0.9225, "Validation"],
['ResNet50', 7, 0.7351, "Validation"],
['ResNet50', 8, 0.7879, "Validation"],
['ResNet50', 9, 0.5196, "Validation"],
['ResNet50', 10, 0.9628, "Validation"],
['ResNet50', 11, 0.9931, "Validation"],
['ResNet50', 12, 0.9959, "Validation"],
['ResNet50', 13, 0.9969, "Validation"],
['ResNet50', 14, 0.9964, "Validation"],
['ResNet50', 15, 0.9969, "Validation"],
['ResNet50', 16, 0.9942, "Validation"],
['ResNet50', 17, 0.9966, "Validation"],
['ResNet50', 18, 0.9976, "Validation"],
['ResNet50', 19, 0.9977, "Validation"],
['ResNet50', 20, 0.9971, "Validation"],
['ResNet50', 21, 0.9943, "Validation"],
['ResNet50', 22, 0.9962, "Validation"],
['ResNet50', 23, 0.9961, "Validation"],
['ResNet50', 24, 0.9965, "Validation"],
['ResNet50', 25, 0.9978, "Validation"],
['ResNet50', 26, 0.9983, "Validation"],
['ResNet50', 27, 0.9984, "Validation"],
['ResNet50', 28, 0.9977, "Validation"],
['ResNet50', 29, 0.9981, "Validation"],
['ResNet50', 30, 0.9982, "Validation"],
]
new_loss = [
['MobileNetV2', 1, 0.0328, 'Training'],
['MobileNetV2', 2, 0.0265, 'Training'],
['MobileNetV2', 3, 0.0232, 'Training'], ## 3
['MobileNetV2', 4, 0.0180, 'Training'], ## 4
['MobileNetV2', 5, 0.0163, 'Training'], ## 5
['MobileNetV2', 6, 0.0149, 'Training'], ## 6
['MobileNetV2', 7, 0.0128, 'Training'], ## 7
['MobileNetV2', 8, 0.0122, 'Training'], ## 8
['MobileNetV2', 9, 0.0127, 'Training'], ## 9
['MobileNetV2', 10, 0.0093,'Training'], ## 10
['MobileNetV2', 11, 0.0106,'Training'], ## 1
['MobileNetV2', 12, 0.0096,'Training'], ## 2
['MobileNetV2', 13, 0.0092,'Training'], ## 3
['MobileNetV2', 14, 0.0075,'Training'], ## 4
['MobileNetV2', 15, 0.0074,'Training'], ## 5
['MobileNetV2', 16, 0.0031,'Training'], ## 6
['MobileNetV2', 17, 0.0020,'Training'], ## 7
['MobileNetV2', 18, 0.0019,'Training'], ## 8
['MobileNetV2', 19, 0.0018,'Training'], ## 9
['MobileNetV2', 20, 0.0016,'Training'], ## 10
['MobileNetV2', 21, 0.0014,'Training'], ## 1
['MobileNetV2', 22, 0.0014,'Training'], ## 2
['MobileNetV2', 23, 0.0012,'Training'], ## 3
['MobileNetV2', 24, 0.0013,'Training'], ## 4
['MobileNetV2', 25, 9.1785e-04, 'Training'],
['MobileNetV2', 26, 0.0014, 'Training'],
['MobileNetV2', 27, 0.0011, 'Training'],
['MobileNetV2', 28, 0.0010, 'Training'],
['MobileNetV2', 29, 0.0011, 'Training'],
['MobileNetV2', 30, 0.0012, 'Training'],
['MobileNetV2', 1, 16.2260, 'Validation'],
['MobileNetV2', 2, 9.2775, 'Validation'],
['MobileNetV2', 3, 10.2493, 'Validation'], ## 3
['MobileNetV2', 4, 2.2250, 'Validation'], ## 4
['MobileNetV2', 5, 3.3614, 'Validation'], ## 5
['MobileNetV2', 6, 0.1620, 'Validation'], ## 6
['MobileNetV2', 7, 1.6062, 'Validation'], ## 7
['MobileNetV2', 8, 2.7681, 'Validation'], ## 8
['MobileNetV2', 9, 0.1952, 'Validation'], ## 9
['MobileNetV2', 10, 0.0191, 'Validation'], ## 10
['MobileNetV2', 11, 0.0193, 'Validation'], ## 1
['MobileNetV2', 12, 0.0728, 'Validation'], ## 2
['MobileNetV2', 13, 0.0683, 'Validation'], ## 3
['MobileNetV2', 14, 0.1830, 'Validation'], ## 4
['MobileNetV2', 15, 0.0676, 'Validation'], ## 5
['MobileNetV2', 16, 0.0051, 'Validation'], ## 6
['MobileNetV2', 17, 0.0033, 'Validation'], ## 7
['MobileNetV2', 18, 0.0037, 'Validation'], ## 8
['MobileNetV2', 19, 0.0054, 'Validation'], ## 9
['MobileNetV2', 20, 0.0045, 'Validation'], ## 10
['MobileNetV2', 21, 0.0043, 'Validation'], ## 1
['MobileNetV2', 22, 0.0049, 'Validation'], ## 2
['MobileNetV2', 23, 0.0050, 'Validation'], ## 3
['MobileNetV2', 24, 0.0058, 'Validation'], ## 4
['MobileNetV2', 25, 0.0012, 'Validation'],
['MobileNetV2', 26, 0.0023, 'Validation'],
['MobileNetV2', 27, 0.0047, 'Validation'],
['MobileNetV2', 28, 0.0032, 'Validation'],
['MobileNetV2', 29, 0.0106, 'Validation'],
['MobileNetV2', 30, 0.0038, 'Validation'],
### InceptionV3
['InceptionV3', 1, 0.2072, 'Training'],
['InceptionV3', 2, 0.1054, 'Training'],
['InceptionV3', 3, 0.0886, 'Training'],
['InceptionV3', 4, 0.0652, 'Training'],
['InceptionV3', 5, 0.0586, 'Training'],
['InceptionV3', 6, 0.0380, 'Training'],
['InceptionV3', 7, 0.0330, 'Training'],
['InceptionV3', 8, 0.0274, 'Training'],
['InceptionV3', 9, 0.0270, 'Training'],
['InceptionV3', 10, 0.0250,'Training'],
['InceptionV3', 11, 0.0350,'Training'],
['InceptionV3', 12, 0.0186,'Training'],
['InceptionV3', 13, 0.0186,'Training'],
['InceptionV3', 14, 0.0271,'Training'],
['InceptionV3', 15, 0.0175,'Training'],
['InceptionV3', 16, 0.0161,'Training'],
['InceptionV3', 17, 0.0177,'Training'],
['InceptionV3', 18, 0.0127,'Training'],
['InceptionV3', 19, 0.0138,'Training'],
['InceptionV3', 20, 0.0125,'Training'],
['InceptionV3', 21, 0.0128,'Training'],
['InceptionV3', 22, 0.0139,'Training'],
['InceptionV3', 23, 0.0083,'Training'],
['InceptionV3', 24, 0.0057,'Training'],
['InceptionV3', 25, 0.0045,'Training'],
['InceptionV3', 26, 0.0044,'Training'],
['InceptionV3', 27, 0.0029,'Training'],
['InceptionV3', 28, 0.0035,'Training'],
['InceptionV3', 29, 0.0027,'Training'],
['InceptionV3', 30, 0.0026,'Training'],
['InceptionV3', 1, 0.1038, 'Validation'],
['InceptionV3', 2, 0.1128, 'Validation'],
['InceptionV3', 3, 0.1255, 'Validation'],
['InceptionV3', 4, 0.2765, 'Validation'],
['InceptionV3', 5, 0.0467, 'Validation'],
['InceptionV3', 6, 0.0980, 'Validation'],
['InceptionV3', 7, 0.0759, 'Validation'],
['InceptionV3', 8, 0.0499, 'Validation'],
['InceptionV3', 9, 0.1046, 'Validation'],
['InceptionV3', 10, 0.3535, 'Validation'],
['InceptionV3', 11, 0.0524, 'Validation'],
['InceptionV3', 12, 0.0330, 'Validation'],
['InceptionV3', 13, 2.3618, 'Validation'],
['InceptionV3', 14, 0.0413, 'Validation'],
['InceptionV3', 15, 0.3454, 'Validation'],
['InceptionV3', 16, 0.0321, 'Validation'],
['InceptionV3', 17, 0.0193, 'Validation'],
['InceptionV3', 18, 0.0260, 'Validation'],
['InceptionV3', 19, 0.1391, 'Validation'],
['InceptionV3', 20, 0.0433, 'Validation'],
['InceptionV3', 21, 0.0171, 'Validation'],
['InceptionV3', 22, 0.0184, 'Validation'],
['InceptionV3', 23, 0.0065, 'Validation'],
['InceptionV3', 24, 0.0065, 'Validation'],
['InceptionV3', 25, 0.0057, 'Validation'],
['InceptionV3', 26, 0.0060, 'Validation'],
['InceptionV3', 27, 0.0073, 'Validation'],
['InceptionV3', 28, 0.0054, 'Validation'],
['InceptionV3', 29, 0.0052, 'Validation'],
['InceptionV3', 30, 0.0064, 'Validation'],
### ResNet50
['ResNet50', 1, 0.0920, 'Training'], ## 1
['ResNet50', 2, 0.0622, 'Training'], ## 2
['ResNet50', 3, 0.0570, 'Training'], ## 3
['ResNet50', 4, 0.0460, 'Training'], ## 4
['ResNet50', 5, 0.0405, 'Training'], ## 5
['ResNet50', 6, 0.0431, 'Training'], ## 6
['ResNet50', 7, 0.0360, 'Training'], ## 7
['ResNet50', 8, 0.0298, 'Training'], ## 8
['ResNet50', 9, 0.0297, 'Training'], ## 9
['ResNet50', 10, 0.0295,'Training'], ## 10
['ResNet50', 11, 0.0144,'Training'], ## 1
['ResNet50', 12, 0.0105,'Training'], ## 2
['ResNet50', 13, 0.0085,'Training'], ## 3
['ResNet50', 14, 0.0079,'Training'], ## 4
['ResNet50', 15, 0.0076,'Training'], ## 5
['ResNet50', 16, 0.0070,'Training'], ## 6
['ResNet50', 17, 0.0057,'Training'], ## 7
['ResNet50', 18, 0.0062,'Training'], ## 8
['ResNet50', 19, 0.0058,'Training'], ## 9
['ResNet50', 20, 0.0053,'Training'], ## 10
['ResNet50', 21, 0.0059,'Training'], ## 1
['ResNet50', 22, 0.0050,'Training'], ## 2
['ResNet50', 23, 0.0048,'Training'], ## 3
['ResNet50', 24, 0.0047,'Training'], ## 4
['ResNet50', 25, 0.0043,'Training'], ## 5
['ResNet50', 26, 0.0034,'Training'], ## 6
['ResNet50', 27, 0.0033,'Training'], ## 7
['ResNet50', 28, 0.0033,'Training'], ## 8
['ResNet50', 29, 0.0030,'Training'], ## 9
['ResNet50', 30, 0.0039,'Training'], ## 10
['ResNet50', 1, 1.3306, 'Validation'], ## 1
['ResNet50', 2, 0.2611, 'Validation'], ## 2
['ResNet50', 3, 0.1880, 'Validation'], ## 3
['ResNet50', 4, 0.5956, 'Validation'], ## 4
['ResNet50', 5, 0.0632, 'Validation'], ## 5
['ResNet50', 6, 0.2366, 'Validation'], ## 6
['ResNet50', 7, 1.1562, 'Validation'], ## 7
['ResNet50', 8, 0.6376, 'Validation'], ## 8
['ResNet50', 9, 1.3452, 'Validation'], ## 9
['ResNet50', 10, 0.1477, 'Validation'], ## 10
['ResNet50', 11, 0.0219, 'Validation'], ## 1
['ResNet50', 12, 0.0110, 'Validation'], ## 2
['ResNet50', 13, 0.0098, 'Validation'], ## 3
['ResNet50', 14, 0.0105, 'Validation'], ## 4
['ResNet50', 15, 0.0088, 'Validation'], ## 5
['ResNet50', 16, 0.0190, 'Validation'], ## 6
['ResNet50', 17, 0.0096, 'Validation'], ## 7
['ResNet50', 18, 0.0068, 'Validation'], ## 8
['ResNet50', 19, 0.0073, 'Validation'], ## 9
['ResNet50', 20, 0.0101, 'Validation'], ## 10
['ResNet50', 21, 0.0160, 'Validation'], ## 1
['ResNet50', 22, 0.0153, 'Validation'], ## 2
['ResNet50', 23, 0.0117, 'Validation'], ## 3
['ResNet50', 24, 0.0091, 'Validation'], ## 4
['ResNet50', 25, 0.0066, 'Validation'], ## 5
['ResNet50', 26, 0.0058, 'Validation'], ## 6
['ResNet50', 27, 0.0058, 'Validation'], ## 7
['ResNet50', 28, 0.0071, 'Validation'], ## 8
['ResNet50', 29, 0.0070, 'Validation'], ## 9
['ResNet50', 30, 0.0063, 'Validation'], ## 10
]
new_acc_df = pd.DataFrame(new_accuracy, columns=['Model', 'Epoch', 'Accuracy', 'Training/Validation'])
new_loss_df = pd.DataFrame(new_loss, columns=['Model', 'Epoch', 'Loss', 'Training/Validation'])
new_acc_df
new_loss_df
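# The long-format lists above were built by hand; pandas melt can produce the
# same shape from the earlier wide accuracy_df (a sketch that assumes
# accuracy_df from the previous cells is still in scope):
melted = accuracy_df.melt(
    id_vars=['Model', 'Epoch'],
    value_vars=['Training Accuracy', 'Validation Accuracy'],
    var_name='Training/Validation',
    value_name='Accuracy'
)
melted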
import seaborn as sns
plt.figure(figsize=(16, 8))
plt.suptitle('Comparison of Accuracy and Loss on Training and Validation Data Across the 3 Model Architectures', fontsize=16)
plt.subplot(1, 2, 1)
plt.title('Accuracy')
sns.lineplot(data=new_acc_df, x="Epoch", y="Accuracy", hue="Model", style='Training/Validation')
plt.subplot(1, 2, 2)
sns.lineplot(data=new_loss_df, x="Epoch", y="Loss", hue="Model", style='Training/Validation')
plt.title('Loss')
```
['InceptionV3', 23, 0.9979, "Validation"],
['InceptionV3', 24, 0.9982, "Validation"],
['InceptionV3', 25, 0.9983, "Validation"],
['InceptionV3', 26, 0.9982, "Validation"],
['InceptionV3', 27, 0.9979, "Validation"],
['InceptionV3', 28, 0.9982, "Validation"],
['InceptionV3', 29, 0.9983, "Validation"],
['InceptionV3', 30, 0.9982, "Validation"],
### ResNet50
['ResNet50', 1, 0.9678, 'Training'],
['ResNet50', 2, 0.9780, 'Training'],
['ResNet50', 3, 0.9799, 'Training'],
['ResNet50', 4, 0.9840, 'Training'],
['ResNet50', 5, 0.9863, 'Training'],
['ResNet50', 6, 0.9850, 'Training'],
['ResNet50', 7, 0.9876, 'Training'],
['ResNet50', 8, 0.9897, 'Training'],
['ResNet50', 9, 0.9902, 'Training'],
['ResNet50', 10, 0.9898,'Training'],
['ResNet50', 11, 0.9951,'Training'],
['ResNet50', 12, 0.9962,'Training'],
['ResNet50', 13, 0.9966,'Training'],
['ResNet50', 14, 0.9972,'Training'],
['ResNet50', 15, 0.9973,'Training'],
['ResNet50', 16, 0.9976,'Training'],
['ResNet50', 17, 0.9981,'Training'],
['ResNet50', 18, 0.9978,'Training'],
['ResNet50', 19, 0.9980,'Training'],
['ResNet50', 20, 0.9982,'Training'],
['ResNet50', 21, 0.9978,'Training'],
['ResNet50', 22, 0.9982,'Training'],
['ResNet50', 23, 0.9983,'Training'],
['ResNet50', 24, 0.9984,'Training'],
['ResNet50', 25, 0.9985,'Training'],
['ResNet50', 26, 0.9988,'Training'],
['ResNet50', 27, 0.9988,'Training'],
['ResNet50', 28, 0.9988,'Training'],
['ResNet50', 29, 0.9989,'Training'],
['ResNet50', 30, 0.9987,'Training'],
['ResNet50', 1, 0.6914, "Validation"],
['ResNet50', 2, 0.8896, "Validation"],
['ResNet50', 3, 0.9220, "Validation"],
['ResNet50', 4, 0.7072, "Validation"],
['ResNet50', 5, 0.9812, "Validation"],
['ResNet50', 6, 0.9225, "Validation"],
['ResNet50', 7, 0.7351, "Validation"],
['ResNet50', 8, 0.7879, "Validation"],
['ResNet50', 9, 0.5196, "Validation"],
['ResNet50', 10, 0.9628, "Validation"],
['ResNet50', 11, 0.9931, "Validation"],
['ResNet50', 12, 0.9959, "Validation"],
['ResNet50', 13, 0.9969, "Validation"],
['ResNet50', 14, 0.9964, "Validation"],
['ResNet50', 15, 0.9969, "Validation"],
['ResNet50', 16, 0.9942, "Validation"],
['ResNet50', 17, 0.9966, "Validation"],
['ResNet50', 18, 0.9976, "Validation"],
['ResNet50', 19, 0.9977, "Validation"],
['ResNet50', 20, 0.9971, "Validation"],
['ResNet50', 21, 0.9943, "Validation"],
['ResNet50', 22, 0.9962, "Validation"],
['ResNet50', 23, 0.9961, "Validation"],
['ResNet50', 24, 0.9965, "Validation"],
['ResNet50', 25, 0.9978, "Validation"],
['ResNet50', 26, 0.9983, "Validation"],
['ResNet50', 27, 0.9984, "Validation"],
['ResNet50', 28, 0.9977, "Validation"],
['ResNet50', 29, 0.9981, "Validation"],
['ResNet50', 30, 0.9982, "Validation"],
]
new_loss = [
['MobileNetV2', 1, 0.0328, 'Training'],
['MobileNetV2', 2, 0.0265, 'Training'],
['MobileNetV2', 3, 0.0232, 'Training'], ## 3
['MobileNetV2', 4, 0.0180, 'Training'], ## 4
['MobileNetV2', 5, 0.0163, 'Training'], ## 5
['MobileNetV2', 6, 0.0149, 'Training'], ## 6
['MobileNetV2', 7, 0.0128, 'Training'], ## 7
['MobileNetV2', 8, 0.0122, 'Training'], ## 8
['MobileNetV2', 9, 0.0127, 'Training'], ## 9
['MobileNetV2', 10, 0.0093,'Training'], ## 10
['MobileNetV2', 11, 0.0106,'Training'], ## 1
['MobileNetV2', 12, 0.0096,'Training'], ## 2
['MobileNetV2', 13, 0.0092,'Training'], ## 3
['MobileNetV2', 14, 0.0075,'Training'], ## 4
['MobileNetV2', 15, 0.0074,'Training'], ## 5
['MobileNetV2', 16, 0.0031,'Training'], ## 6
['MobileNetV2', 17, 0.0020,'Training'], ## 7
['MobileNetV2', 18, 0.0019,'Training'], ## 8
['MobileNetV2', 19, 0.0018,'Training'], ## 9
['MobileNetV2', 20, 0.0016,'Training'], ## 10
['MobileNetV2', 21, 0.0014,'Training'], ## 1
['MobileNetV2', 22, 0.0014,'Training'], ## 2
['MobileNetV2', 23, 0.0012,'Training'], ## 3
['MobileNetV2', 24, 0.0013,'Training'], ## 4
['MobileNetV2', 25, 9.1785e-04, 'Training'],
['MobileNetV2', 26, 0.0014, 'Training'],
['MobileNetV2', 27, 0.0011, 'Training'],
['MobileNetV2', 28, 0.0010, 'Training'],
['MobileNetV2', 29, 0.0011, 'Training'],
['MobileNetV2', 30, 0.0012, 'Training'],
['MobileNetV2', 1, 16.2260, 'Validation'],
['MobileNetV2', 2, 9.2775, 'Validation'],
['MobileNetV2', 3, 10.2493, 'Validation'], ## 3
['MobileNetV2', 4, 2.2250, 'Validation'], ## 4
['MobileNetV2', 5, 3.3614, 'Validation'], ## 5
['MobileNetV2', 6, 0.1620, 'Validation'], ## 6
['MobileNetV2', 7, 1.6062, 'Validation'], ## 7
['MobileNetV2', 8, 2.7681, 'Validation'], ## 8
['MobileNetV2', 9, 0.1952, 'Validation'], ## 9
['MobileNetV2', 10, 0.0191, 'Validation'], ## 10
['MobileNetV2', 11, 0.0193, 'Validation'], ## 1
['MobileNetV2', 12, 0.0728, 'Validation'], ## 2
['MobileNetV2', 13, 0.0683, 'Validation'], ## 3
['MobileNetV2', 14, 0.1830, 'Validation'], ## 4
['MobileNetV2', 15, 0.0676, 'Validation'], ## 5
['MobileNetV2', 16, 0.0051, 'Validation'], ## 6
['MobileNetV2', 17, 0.0033, 'Validation'], ## 7
['MobileNetV2', 18, 0.0037, 'Validation'], ## 8
['MobileNetV2', 19, 0.0054, 'Validation'], ## 9
['MobileNetV2', 20, 0.0045, 'Validation'], ## 10
['MobileNetV2', 21, 0.0043, 'Validation'], ## 1
['MobileNetV2', 22, 0.0049, 'Validation'], ## 2
['MobileNetV2', 23, 0.0050, 'Validation'], ## 3
['MobileNetV2', 24, 0.0058, 'Validation'], ## 4
['MobileNetV2', 25, 0.0012, 'Validation'],
['MobileNetV2', 26, 0.0023, 'Validation'],
['MobileNetV2', 27, 0.0047, 'Validation'],
['MobileNetV2', 28, 0.0032, 'Validation'],
['MobileNetV2', 29, 0.0106, 'Validation'],
['MobileNetV2', 30, 0.0038, 'Validation'],
### InceptionV3
['InceptionV3', 1, 0.2072, 'Training'],
['InceptionV3', 2, 0.1054, 'Training'],
['InceptionV3', 3, 0.0886, 'Training'],
['InceptionV3', 4, 0.0652, 'Training'],
['InceptionV3', 5, 0.0586, 'Training'],
['InceptionV3', 6, 0.0380, 'Training'],
['InceptionV3', 7, 0.0330, 'Training'],
['InceptionV3', 8, 0.0274, 'Training'],
['InceptionV3', 9, 0.0270, 'Training'],
['InceptionV3', 10, 0.0250,'Training'],
['InceptionV3', 11, 0.0350,'Training'],
['InceptionV3', 12, 0.0186,'Training'],
['InceptionV3', 13, 0.0186,'Training'],
['InceptionV3', 14, 0.0271,'Training'],
['InceptionV3', 15, 0.0175,'Training'],
['InceptionV3', 16, 0.0161,'Training'],
['InceptionV3', 17, 0.0177,'Training'],
['InceptionV3', 18, 0.0127,'Training'],
['InceptionV3', 19, 0.0138,'Training'],
['InceptionV3', 20, 0.0125,'Training'],
['InceptionV3', 21, 0.0128,'Training'],
['InceptionV3', 22, 0.0139,'Training'],
['InceptionV3', 23, 0.0083,'Training'],
['InceptionV3', 24, 0.0057,'Training'],
['InceptionV3', 25, 0.0045,'Training'],
['InceptionV3', 26, 0.0044,'Training'],
['InceptionV3', 27, 0.0029,'Training'],
['InceptionV3', 28, 0.0035,'Training'],
['InceptionV3', 29, 0.0027,'Training'],
['InceptionV3', 30, 0.0026,'Training'],
['InceptionV3', 1, 0.1038, 'Validation'],
['InceptionV3', 2, 0.1128, 'Validation'],
['InceptionV3', 3, 0.1255, 'Validation'],
['InceptionV3', 4, 0.2765, 'Validation'],
['InceptionV3', 5, 0.0467, 'Validation'],
['InceptionV3', 6, 0.0980, 'Validation'],
['InceptionV3', 7, 0.0759, 'Validation'],
['InceptionV3', 8, 0.0499, 'Validation'],
['InceptionV3', 9, 0.1046, 'Validation'],
['InceptionV3', 10, 0.3535, 'Validation'],
['InceptionV3', 11, 0.0524, 'Validation'],
['InceptionV3', 12, 0.0330, 'Validation'],
['InceptionV3', 13, 2.3618, 'Validation'],
['InceptionV3', 14, 0.0413, 'Validation'],
['InceptionV3', 15, 0.3454, 'Validation'],
['InceptionV3', 16, 0.0321, 'Validation'],
['InceptionV3', 17, 0.0193, 'Validation'],
['InceptionV3', 18, 0.0260, 'Validation'],
['InceptionV3', 19, 0.1391, 'Validation'],
['InceptionV3', 20, 0.0433, 'Validation'],
['InceptionV3', 21, 0.0171, 'Validation'],
['InceptionV3', 22, 0.0184, 'Validation'],
['InceptionV3', 23, 0.0065, 'Validation'],
['InceptionV3', 24, 0.0065, 'Validation'],
['InceptionV3', 25, 0.0057, 'Validation'],
['InceptionV3', 26, 0.0060, 'Validation'],
['InceptionV3', 27, 0.0073, 'Validation'],
['InceptionV3', 28, 0.0054, 'Validation'],
['InceptionV3', 29, 0.0052, 'Validation'],
['InceptionV3', 30, 0.0064, 'Validation'],
### ResNet50
['ResNet50', 1, 0.0920, 'Training'], ## 1
['ResNet50', 2, 0.0622, 'Training'], ## 2
['ResNet50', 3, 0.0570, 'Training'], ## 3
['ResNet50', 4, 0.0460, 'Training'], ## 4
['ResNet50', 5, 0.0405, 'Training'], ## 5
['ResNet50', 6, 0.0431, 'Training'], ## 6
['ResNet50', 7, 0.0360, 'Training'], ## 7
['ResNet50', 8, 0.0298, 'Training'], ## 8
['ResNet50', 9, 0.0297, 'Training'], ## 9
['ResNet50', 10, 0.0295,'Training'], ## 10
['ResNet50', 11, 0.0144,'Training'], ## 1
['ResNet50', 12, 0.0105,'Training'], ## 2
['ResNet50', 13, 0.0085,'Training'], ## 3
['ResNet50', 14, 0.0079,'Training'], ## 4
['ResNet50', 15, 0.0076,'Training'], ## 5
['ResNet50', 16, 0.0070,'Training'], ## 6
['ResNet50', 17, 0.0057,'Training'], ## 7
['ResNet50', 18, 0.0062,'Training'], ## 8
['ResNet50', 19, 0.0058,'Training'], ## 9
['ResNet50', 20, 0.0053,'Training'], ## 10
['ResNet50', 21, 0.0059,'Training'], ## 1
['ResNet50', 22, 0.0050,'Training'], ## 2
['ResNet50', 23, 0.0048,'Training'], ## 3
['ResNet50', 24, 0.0047,'Training'], ## 4
['ResNet50', 25, 0.0043,'Training'], ## 5
['ResNet50', 26, 0.0034,'Training'], ## 6
['ResNet50', 27, 0.0033,'Training'], ## 7
['ResNet50', 28, 0.0033,'Training'], ## 8
['ResNet50', 29, 0.0030,'Training'], ## 9
['ResNet50', 30, 0.0039,'Training'], ## 10
['ResNet50', 1, 1.3306, 'Validation'], ## 1
['ResNet50', 2, 0.2611, 'Validation'], ## 2
['ResNet50', 3, 0.1880, 'Validation'], ## 3
['ResNet50', 4, 0.5956, 'Validation'], ## 4
['ResNet50', 5, 0.0632, 'Validation'], ## 5
['ResNet50', 6, 0.2366, 'Validation'], ## 6
['ResNet50', 7, 1.1562, 'Validation'], ## 7
['ResNet50', 8, 0.6376, 'Validation'], ## 8
['ResNet50', 9, 1.3452, 'Validation'], ## 9
['ResNet50', 10, 0.1477, 'Validation'], ## 10
['ResNet50', 11, 0.0219, 'Validation'], ## 1
['ResNet50', 12, 0.0110, 'Validation'], ## 2
['ResNet50', 13, 0.0098, 'Validation'], ## 3
['ResNet50', 14, 0.0105, 'Validation'], ## 4
['ResNet50', 15, 0.0088, 'Validation'], ## 5
['ResNet50', 16, 0.0190, 'Validation'], ## 6
['ResNet50', 17, 0.0096, 'Validation'], ## 7
['ResNet50', 18, 0.0068, 'Validation'], ## 8
['ResNet50', 19, 0.0073, 'Validation'], ## 9
['ResNet50', 20, 0.0101, 'Validation'], ## 10
['ResNet50', 21, 0.0160, 'Validation'], ## 1
['ResNet50', 22, 0.0153, 'Validation'], ## 2
['ResNet50', 23, 0.0117, 'Validation'], ## 3
['ResNet50', 24, 0.0091, 'Validation'], ## 4
['ResNet50', 25, 0.0066, 'Validation'], ## 5
['ResNet50', 26, 0.0058, 'Validation'], ## 6
['ResNet50', 27, 0.0058, 'Validation'], ## 7
['ResNet50', 28, 0.0071, 'Validation'], ## 8
['ResNet50', 29, 0.0070, 'Validation'], ## 9
['ResNet50', 30, 0.0063, 'Validation'], ## 10
]
new_acc_df = pd.DataFrame(new_accuracy, columns=['Model', 'Epoch', 'Accuracy', 'Training/Validation'])
new_loss_df = pd.DataFrame(new_loss, columns=['Model', 'Epoch', 'Loss', 'Training/Validation'])
new_acc_df
new_loss_df
import seaborn as sns
plt.figure(figsize=(16, 8))
plt.suptitle('Comparison of Accuracy and Loss on Training and Validation Data Across 3 Model Architectures', fontsize=16)
plt.subplot(1, 2, 1)
plt.title('Accuracy')
sns.lineplot(data=new_acc_df, x="Epoch", y="Accuracy", hue="Model", style='Training/Validation')
plt.subplot(1, 2, 2)
sns.lineplot(data=new_loss_df, x="Epoch", y="Loss", hue="Model", style='Training/Validation')
plt.title('Loss')
# Check `GDS` Python stack
This notebook checks that all software requirements for the course Geographic Data Science are correctly installed.
A successful run of the notebook implies no errors returned in any cell *and* every cell beyond the first one returning a printout of `True`. This ensures the environment is installed correctly.
```
import black
import bokeh
import cenpy
import colorama
import contextily
import cython
import dask
import dask_ml
import datashader
import dill
import geopandas
import geopy
import hdbscan
import ipyleaflet
import ipyparallel
import ipywidgets
import mplleaflet
import nbdime
import networkx
import osmnx
import palettable
import pandana
import polyline
try:
import pygeoda
except:
import warnings
warnings.warn("pygeoda not installed. This may be "\
"because the check it's not running on the "\
"official container")
import pysal
import qgrid
import rasterio
import rasterstats
import skimage
import sklearn
import seaborn
import spatialpandas
import statsmodels
import urbanaccess
import xlrd
import xlsxwriter
```
---
**Legacy checks** (in some ways superseded by those above, but still useful in some respects)
```
import bokeh as bk
float(bk.__version__[:1]) >= 1
import matplotlib as mpl
float(mpl.__version__[:3]) >= 1.5
import mplleaflet as mpll
import seaborn as sns
float(sns.__version__[:3]) >= 0.6
import datashader as ds
float(ds.__version__[:3]) >= 0.6
import palettable as pltt
float(pltt.__version__[:3]) >= 3.1
sns.palplot(pltt.matplotlib.Viridis_10.hex_colors)
```
---
```
import pandas as pd
float(pd.__version__[:3]) >= 1
import dask
float(dask.__version__[:1]) >= 1
import sklearn
float(sklearn.__version__[:4]) >= 0.20
import statsmodels.api as sm
float(sm.__version__[2:4]) >= 10
```
---
```
import fiona
float(fiona.__version__[:3]) >= 1.8
import geopandas as gpd
float(gpd.__version__[:3]) >= 0.4
import pysal as ps
float(ps.__version__[:1]) >= 2
import rasterio as rio
float(rio.__version__[:1]) >= 1
```
# Test
```
shp = pysal.lib.examples.get_path('columbus.shp')
db = geopandas.read_file(shp)
db.head()
db[['AREA', 'PERIMETER']].to_feather('db.feather')
tst = pd.read_feather('db.feather')
! rm db.feather
import matplotlib.pyplot as plt
%matplotlib inline
f, ax = plt.subplots(1)
db.plot(facecolor='yellow', ax=ax)
ax.set_axis_off()
plt.show()
db.crs = 'EPSG:26918'
db_wgs84 = db.to_crs(epsg=4326)
db_wgs84.plot()
plt.show()
from pysal.viz import splot
from splot.mapping import vba_choropleth
f, ax = vba_choropleth(db['INC'], db['HOVAL'], db)
db.plot(column='INC', scheme='fisher_jenks', cmap=plt.matplotlib.cm.Blues)
plt.show()
city = osmnx.gdf_from_place('Berkeley, California')
osmnx.plot_shape(osmnx.project_gdf(city));
import numpy as np
import contextily as ctx
tl = ctx.providers.CartoDB.Positron
db = geopandas.read_file(ps.lib.examples.get_path('us48.shp'))
db.crs = "EPSG:4326"
dbp = db.to_crs(epsg=3857)
w, s, e, n = dbp.total_bounds
# Download raster
_ = ctx.bounds2raster(w, s, e, n, 'us.tif', url=tl)
# Load up and plot
source = rio.open('us.tif', 'r')
red = source.read(1)
green = source.read(2)
blue = source.read(3)
pix = np.dstack((red, green, blue))
bounds = (source.bounds.left, source.bounds.right, \
source.bounds.bottom, source.bounds.top)
f = plt.figure(figsize=(6, 6))
ax = plt.imshow(pix, extent=bounds)
! rm us.tif
ax = db.plot()
ctx.add_basemap(ax, crs=db.crs.to_string())
from ipyleaflet import Map, basemaps, basemap_to_tiles, SplitMapControl
m = Map(center=(42.6824, 365.581), zoom=5)
right_layer = basemap_to_tiles(basemaps.NASAGIBS.ModisTerraTrueColorCR, "2017-11-11")
left_layer = basemap_to_tiles(basemaps.NASAGIBS.ModisAquaBands721CR, "2017-11-11")
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
m.add_control(control)
m
from IPython.display import GeoJSON
GeoJSON({
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [-118.4563712, 34.0163116]
}
})
```
# Covid-19 Pandemic Analysis on April 3rd
## Import Data
The dataset can be downloaded from [Kaggle](https://www.kaggle.com/sudalairajkumar/novel-corona-virus-2019-dataset).
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import re
import math
%matplotlib inline
# Explore the data files
for dirname, _, filenames in os.walk('~/Data/novel-corona-virus-2019-dataset'):
for filename in filenames:
print(os.path.join(dirname, filename))
#os.listdir("/Users/yuc/Data/novel-corona-virus-2019-dataset")
# Explore the covid-19 dataset
df = pd.read_csv("~/Data/novel-corona-virus-2019-dataset/covid_19_data.csv", parse_dates=["ObservationDate"])
df.info()
# Country-level Data
countries = df.groupby(["ObservationDate", "Country/Region"])[["Confirmed", "Deaths", "Recovered"]].sum().reset_index()
countries.info()
```
## Worldwide Pandemic
### How bad is the world-wide covid-19 pandemic?
```
# World-level covid-19 data
world = df.groupby("ObservationDate")[["Confirmed", "Deaths", "Recovered"]].sum()
world.head()
# Confirmed case plot
fig, ax = plt.subplots()
ax.plot(world.index, world["Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case")
ax.set_title("World Pandemic")
#ax.set_yscale("log")
ax.grid()
plt.show()
```
### The pandemic peak has not yet arrived.
According to the [WHO phases of pandemic alert for H1N1](https://www.who.int/csr/disease/swineflu/phase/en/), the peak of the pandemic has not yet arrived. The growth curve looks like an exponential increase.
#### Is the growth of pandemic exponential?
```
# Simple Linear Regression for log-scale growth curve
X = np.array(range(world.shape[0])).reshape(-1, 1)
y = world["Confirmed"].apply(math.log)
lm = LinearRegression()
lm.fit(X, y)
y_pred = lm.predict(X)
# Plot regression versus log-scale growth curve
fig, ax = plt.subplots()
ax.plot(world.index, world["Confirmed"].apply(math.log), label="Real Trend")
ax.plot(world.index, y_pred, color='r', label='Linear Regression')
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case (Log)")
ax.set_title("World Pandemic_Log")
#ax.set_yscale("log")
ax.grid()
ax.legend()
plt.show()
```
#### The worldwide case count grows exponentially, so it can be predicted reasonably well by a linear regression on the log scale: the fitted line closely tracks the log-scale growth curve.
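Since the regression was fit on the natural log of the confirmed counts against the day index, its slope can be read as a daily exponential growth rate. A small follow-up sketch using the `lm` fitted above (the interpretation assumes roughly constant growth over the fitted window):
```
# Interpret the log-linear fit: the slope is the daily growth rate of log(confirmed)
daily_growth_rate = lm.coef_[0]                       # per-day increase of log cases
doubling_time_days = math.log(2) / daily_growth_rate  # days needed for cases to double
print("Estimated daily growth rate:", round(daily_growth_rate, 3))
print("Implied doubling time (days):", round(doubling_time_days, 1))
```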
## Additional Data
Theoretically, population is an important factor in the spread of the pandemic. Therefore, we use an additional population dataset from [Kaggle](https://www.kaggle.com/tanuprabhu/population-by-country-2020) to support our analysis.
```
# Load additional data
pop = pd.read_csv("~/Data/population_by_country_2020.csv")
# Uniform country names
countries["Country"] = countries["Country/Region"].replace(
["Mainland China","US", "UK", "Macau"], ["China", "United States", "United Kingdom", "Macao"])
countries["Country"].value_counts().head(30)
```
For the country-level analysis we keep only the countries with more than 46 days of records (the top 29 by record count), which also drops the aggregated "Others" entry.
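The hard-coded `index[0:29]` used in the next cell is just one way of picking those countries. An equivalent, more explicit selection based on the 46-day cutoff stated above would look like this (a sketch; dropping an aggregated "Others" entry is an assumption based on the note above):
```
# Equivalent, explicit selection: keep countries observed on more than 46 days
record_counts = countries["Country"].value_counts()
country_list_alt = list(record_counts[record_counts > 46]
                        .drop("Others", errors="ignore")  # drop the aggregated entry if present
                        .index)
```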
```
country_list = list(countries["Country"].value_counts().index[0:29])
country_list
# Select related features
country_pop = pop[pop["Country (or dependency)"].isin(country_list)].copy()
country_pop.info()
# String transformation
country_pop = country_pop.drop(columns=["Yearly Change", "Net Change", "Fert. Rate"])
country_pop["World Share"] = country_pop["World Share"].str.strip("%").astype(float) / 100
country_pop["Urban Pop %"] = country_pop["Urban Pop %"].replace('N.A.', '100%')
country_pop["Urban Pop %"] = country_pop["Urban Pop %"].str.strip("%").astype(float) / 100
country_pop["Med. Age"] = country_pop["Med. Age"].astype(int)
country_pop.rename(columns={"Country (or dependency)": "Country"}, inplace=True)
country_pop.info()
# Join Data
country_df = countries.merge(country_pop, how='inner', on = "Country")
country_df.info()
country_df.head()
# Scaling features to percentage
country = country_df.drop(columns=["Country/Region", "Land Area (Km²)"]).copy()
country["Confirmed_Rate"] = country["Confirmed"] / country_df["Population (2020)"]
country["Death_Rate"] = country["Deaths"] / country_df["Population (2020)"]
country["Recover_Rate"] = country["Recovered"] / country_df["Population (2020)"]
country["Density"] = country["Density (P/Km²)"] / country_df["Population (2020)"]
country["Migrants"] = country["Migrants (net)"] / country_df["Population (2020)"]
country.drop(columns=["Density (P/Km²)", "Migrants (net)"], inplace=True)
country.head()
```
## EDA based on SIR Model Parameters: China, South Korea, US, and Italy
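For reference, the `Beta` and `Gamma` columns computed in the cells below are discrete approximations of the standard SIR model parameters, with $S$, $I$, $R$ expressed as population fractions:

$$
\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I
$$

Using daily differences, the estimates below are

$$
\hat{\beta}_t \approx \frac{\Delta C_t}{S_t\, I_t}, \qquad \hat{\gamma}_t \approx \frac{\Delta R_t}{I_t},
$$

where $C_t$ is the cumulative confirmed fraction, $R_t$ is the removed fraction (recovered plus dead), $S_t = 1 - C_t$, and $I_t = C_t - R_t$.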
### China
```
cn = country[country["Country"] == 'China'].copy()
cn.tail()
# Scaling features to percentage
cn["DeathRate"] = cn["Deaths"] / cn["Confirmed"]
###'''
cn["RecoverRate"] = cn["Recovered"] / cn["Confirmed"]
cn["St"] = 1 - cn["Confirmed_Rate"]
cn["Rt"] = cn["Death_Rate"] + cn["Recover_Rate"]
cn["It"] = cn["Confirmed_Rate"] - cn["Rt"]
cn["Diff_Confirmed"] = cn["Confirmed_Rate"].diff(1)
cn["Diff_Rt"] = cn["Rt"].diff(1)
#cn = cn.drop(columns=["Country", "Deaths", "Recovered"])
cn = cn.fillna(0)
###'''
cn.head()
# SIR parameter calculation
cn["Beta"] = cn["Diff_Confirmed"] / (cn["St"] * cn["It"])
cn["Gamma"] = cn["Diff_Rt"] / cn["It"]
cn.tail()
```
### Is pandemic in China post-peak?
```
# Growth Plot
fig, ax = plt.subplots()
ax.plot(cn.ObservationDate, cn["Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case")
ax.set_title("Cumulative Confirmed Case in China")
ax.grid()
plt.show()
# Daily Increase
fig, ax = plt.subplots()
ax.plot(cn.ObservationDate, cn["Diff_Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Daily Increasing Case")
ax.set_title("Daily Increasing Case in China")
ax.grid()
plt.show()
```
### The pandemic in China appears to be post-peak: cumulative growth has plateaued and the daily increase has fallen back to a low, stationary level after its peak.
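One way to make "stationary" concrete is to smooth the daily increase with a short rolling mean and check that it has flattened near zero. A quick sketch using the columns defined above (the 7-day window is an arbitrary choice):
```
# Smooth the daily increase with a 7-day rolling mean to check that it has flattened
fig, ax = plt.subplots()
ax.plot(cn.ObservationDate, cn["Diff_Confirmed"].rolling(7).mean())
ax.set_xlabel("Date")
ax.set_ylabel("7-day Mean of Daily Increase")
ax.set_title("Smoothed Daily Increase in China")
ax.grid()
plt.show()
```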
```
# SIR Simulation
fig, ax = plt.subplots()
#ax.plot(cn.ObservationDate, cn["St"], color = 'b', label = 'S(t)')
ax.plot(cn.ObservationDate, cn["It"], color = 'orange', label = 'I(t)')
ax.plot(cn.ObservationDate, cn["Rt"], color = 'g', label = 'R(t)')
ax.set_xlabel("Date")
ax.set_ylabel("Cases")
ax.set_title("SIR Simulation in China")
#ax.set_yscale('log')
ax.grid()
ax.legend()
plt.show()
# Recovered Rate in SIR Model
fig, ax = plt.subplots()
ax.plot(cn.ObservationDate, cn["Gamma"])
ax.set_xlabel("Date")
ax.set_ylabel("Recovered Rate")
ax.set_title("Recovered (Recovered + Dead) Rate in China")
ax.grid()
plt.show()
# Infective Rate in SIR Model
fig, ax = plt.subplots()
ax.plot(cn.ObservationDate, cn["Beta"])
ax.set_xlabel("Date")
ax.set_ylabel("Infective Rate")
ax.set_title("Infective Rate in China")
ax.grid()
plt.show()
```
### The SIR model also indicates that the pandemic in China is post-peak and improving: the recovery rate continues to increase while the infective rate has decreased to a stationary level.
### South Korea
### Is pandemic in South Korea post-peak?
```
sk = country[country["Country"] == 'South Korea'].copy()
sk.tail()
# Growth Plot
fig, ax = plt.subplots()
ax.plot(sk.ObservationDate, sk["Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case")
ax.set_title("Cumulative Confirmed Case in South Korea")
#ax.set_yscale("log")
ax.grid()
plt.show()
```
### The growth plot suggests the pandemic in South Korea is approaching post-peak.
```
# Scaling features to percentage
sk["DeathRate"] = sk["Deaths"] / sk["Confirmed"]
###'''
sk["RecoverRate"] = sk["Recovered"] / sk["Confirmed"]
sk["St"] = 1 - sk["Confirmed_Rate"]
sk["Rt"] = sk["Death_Rate"] + sk["Recover_Rate"]
sk["It"] = sk["Confirmed_Rate"] - sk["Rt"]
sk["Diff_Confirmed"] = sk["Confirmed_Rate"].diff(1)
sk["Diff_Rt"] = sk["Rt"].diff(1)
#sk = sk.drop(columns=["Deaths", "Recovered"])
sk = sk.fillna(0)
###'''
sk.head()
# SIR Simulation
fig, ax = plt.subplots()
#ax.plot(sk.ObservationDate, sk["St"], color = 'b', label = 'S(t)')
ax.plot(sk.ObservationDate, sk["It"], color = 'orange', label = 'I(t)')
ax.plot(sk.ObservationDate, sk["Rt"], color = 'g', label = 'R(t)')
ax.set_xlabel("Date")
ax.set_ylabel("Cases")
ax.set_title("SIR Simulation in South Korea")
#ax.set_yscale('log')
ax.grid()
ax.legend()
plt.show()
```
### The SIR model shows the pandemic in South Korea is close to post-peak.
```
# SIR Model Parameter Calculation
sk["Beta"] = sk["Diff_Confirmed"] / (sk["St"] * sk["It"])
sk["Gamma"] = sk["Diff_Rt"] / sk["It"]
sk.tail()
# Recovery Rate in SIR Model
fig, ax = plt.subplots()
ax.plot(sk.ObservationDate, sk["Gamma"])
ax.set_xlabel("Date")
ax.set_ylabel("Recovered Rate")
ax.set_title("Recovered (Recovered + Dead) Rate in South Korea")
ax.grid()
plt.show()
# Infective Rate in SIR Model
fig, ax = plt.subplots()
ax.plot(sk.ObservationDate, sk["Beta"])
ax.set_xlabel("Date")
ax.set_ylabel("Infective Rate")
ax.grid()
ax.set_title("Infective Rate in South Korea")
plt.show()
```
### The infective rate has decreased to a stationary level, but the fluctuating recovery rate shows that the pandemic in South Korea is not yet fully post-peak.
- The infective rate beta of Covid-19 is stationary in both China and South Korea.
- Beta in China is around **1%**.
- Beta in South Korea is around **2%**.
### We can conclude that both countries have controlled the covid-19 pandemic successfully.
### US
### Is pandemic in US post-peak?
```
us = country[country["Country"] == 'United States'].copy()
us.tail()
# Growth Plot
fig, ax = plt.subplots()
ax.plot(us.ObservationDate, us["Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case")
ax.set_title("Cumulative Confirmed Case in US")
#ax.set_yscale("log")
ax.grid()
plt.show()
# Scaling features to percentage
us["DeathRate"] = us["Deaths"] / us["Confirmed"]
###'''
us["RecoverRate"] = us["Recovered"] / us["Confirmed"]
us["St"] = 1 - us["Confirmed_Rate"]
us["Rt"] = us["Death_Rate"] + us["Recover_Rate"]
us["It"] = us["Confirmed_Rate"] - us["Rt"]
us["Diff_Confirmed"] = us["Confirmed_Rate"].diff(1)
us["Diff_Rt"] = us["Rt"].diff(1)
#sk = sk.drop(columns=["Deaths", "Recovered"])
us = us.fillna(0)
###'''
us["Beta"] = us["Diff_Confirmed"] / (us["St"] * us["It"])
us["Gamma"] = us["Diff_Rt"] / us["It"]
us.tail()
# SIR Simulation
fig, ax = plt.subplots()
#ax.plot(sk.ObservationDate, sk["St"], color = 'b', label = 'S(t)')
ax.plot(us.ObservationDate, us["It"], color = 'orange', label = 'I(t)')
ax.plot(us.ObservationDate, us["Rt"], color = 'g', label = 'R(t)')
ax.set_xlabel("Date")
ax.set_ylabel("Cases")
ax.set_title("SIR Simulation in United States")
#ax.set_yscale('log')
ax.grid()
ax.legend()
plt.show()
```
### Obviously, the pandemic in the US is serious and not post-peak.
### Italy
### Is pandemic in Italy post-peak?
```
ita = country[country["Country"] == 'Italy'].copy()
ita.tail()
# Growth Plot
fig, ax = plt.subplots()
ax.plot(ita.ObservationDate, ita["Confirmed"])
ax.set_xlabel("Date")
ax.set_ylabel("Confirmed Case")
ax.set_title("Cumulative Confirmed Case in Italy")
#ax.set_yscale("log")
plt.xticks(rotation=45)
ax.grid()
plt.show()
# Scaling features to percentage
ita["DeathRate"] = ita["Deaths"] / ita["Confirmed"]
###'''
ita["RecoverRate"] = ita["Recovered"] / ita["Confirmed"]
ita["St"] = 1 - ita["Confirmed_Rate"]
ita["Rt"] = ita["Death_Rate"] + ita["Recover_Rate"]
ita["It"] = ita["Confirmed_Rate"] - ita["Rt"]
ita["Diff_Confirmed"] = ita["Confirmed_Rate"].diff(1)
ita["Diff_Rt"] = ita["Rt"].diff(1)
#sk = sk.drop(columns=["Deaths", "Recovered"])
ita = ita.fillna(0)
###'''
ita["Beta"] = ita["Diff_Confirmed"] / (ita["St"] * ita["It"])
ita["Gamma"] = ita["Diff_Rt"] / ita["It"]
ita.tail()
# SIR Simulation
fig, ax = plt.subplots()
#ax.plot(sk.ObservationDate, sk["St"], color = 'b', label = 'S(t)')
ax.plot(ita.ObservationDate, ita["It"], color = 'orange', label = 'I(t)')
ax.plot(ita.ObservationDate, ita["Rt"], color = 'g', label = 'R(t)')
ax.set_xlabel("Date")
ax.set_ylabel("Cases")
ax.set_title("SIR Simulation in Italy")
#ax.set_yscale('log')
ax.grid()
ax.legend()
plt.xticks(rotation=45)
plt.show()
```
### Obviously, the pandemic in Italy is also serious and not post-peak.
- Both Italy and the U.S. are suffering from an explosive phase of the covid-19 pandemic.
- Compared with the US, the growth trend in Italy looks somewhat better, although Italy's death rate is high.
### Is the dataset trustworthy? It is hard to say no: the analysis of these four countries accords with their reported situations.
## Baseline Model
### Is population density an important factor impacting the pandemic?
```
# Correlation Heatmap
reg_df = country.copy()
reg_df["Month"] = reg_df["ObservationDate"].map(lambda x: x.month)
reg_df["Weekday"] = reg_df["ObservationDate"].map(lambda x: x.dayofweek)
reg_df["Day"] = reg_df["ObservationDate"].map(lambda x: x.day)
reg_df.drop(columns=["Confirmed", "Deaths", "Recovered", "Population (2020)",
"Death_Rate", "Recover_Rate"], inplace=True)
sns.heatmap(reg_df.corr(), annot=True, fmt=".2f")
# Linear Regression Model
X = reg_df.drop(columns=["ObservationDate", "Country", "Confirmed_Rate"])
y = (reg_df["Confirmed_Rate"] + 0.0000000001).apply(math.log)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=42)
lm_model = LinearRegression()
lm_model.fit(X_train, y_train)
y_pred = lm_model.predict(X_test)
rmse = mean_squared_error(y_test, y_pred)
print("R2:", lm_model.score(X_test, y_test))
print("RMSE:", rmse)
# Feature Importance
imp = pd.DataFrame(lm_model.coef_, columns=["Importance"])
imp.index = X.columns
imp
sns.barplot(x=imp.Importance, y=imp.index, data=imp)
```
### Population density is the most significant factor impacting the pandemic.
We found that population density, urban population rate, world share, and month are positively associated with the country-level confirmed case rate. In addition, the net migrant rate is negatively associated with the country-level confirmed case rate.
# Chapter 5: Point-Neuron Network Models (with PointNet)
In this chapter we will create a heterogeneous network of point-model neurons and use the PointNet simulator, which runs the network with the NEST simulator. As with the previous BioNet examples, we will create both an internal, recurrently connected network of different node types and an external network of "virtual" neurons that will drive the firing of the internal neurons. We'll also show how to drive network activity using a current clamp.
PointNet, like BioNet and the other simulators, uses the SONATA data format for representing networks, setting up simulations and saving results. Thus the tools used to build and display biophysically detailed networks in the previous chapters work just the same.
Requirements:
* bmtk
* NEST 2.11+
## 1. Building the network
There are two ways of generating a network of point-neurons. Either we can take the existing biophysical network created in the previous chapters and make some minor adjustments to the neuron models being used. Or we can build a new network from scratch using the BMTK Builder.
### Converting networks
We want to take the BioNet V1 network and change its parameters so that the individual neurons use point models. Luckily these parameters are stored in the node and edge "types" csv files, so we can easily change them with a simple text editor (emacs, vi, sublime-text, etc.). Here is an example of the old *V1_node_types.csv*:
```
import pandas as pd
pd.read_csv('sources/chapter05/converted_network/V1_node_types_bionet.csv', sep=' ')
```
and here is the *V1_node_types.csv* used for PointNet:
```
pd.read_csv('sources/chapter05/converted_network/V1_node_types.csv', sep=' ')
```
Changes:
* **model_type** - PointNet does not support the "biophysical" model_type; it only supports "point_process" neuron models.
* **model_template** - nrn:IntFire1 and ctdb:Biophys1.hoc are special directives for running NEURON based models. Instead we replaced them with the "nest:\<nest-model\>" directive (note we can replace iaf_psc_alpha with any valid NEST model).
* **dynamics_params** - We have new json parameters files for the new NEST based models.
* **model_processing** - "aibs_perisomatic" is a special command for adjusting the morphology of biophysical models. Since our NEST-based models have no morphology, we set it to NONE, which tells the simulator to use the models as-is (note: you can implement custom model_processing functions for PointNet, as explained later).
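If you prefer to script these edits instead of making them by hand, a rough sketch with pandas is shown below. The file path is a placeholder, writing missing values as `NONE` follows the convention used in SONATA type tables, and `dynamics_params` still has to be updated row by row to point at the new NEST parameter files (e.g. `472363762_point.json`).
```
import pandas as pd

# Placeholder path: point this at your own copy of the node types table
node_types = pd.read_csv('network/V1_node_types.csv', sep=' ')

# Switch every node type to a NEST point-process model
node_types['model_type'] = 'point_process'
node_types['model_template'] = 'nest:iaf_psc_alpha'
node_types['model_processing'] = 'NONE'  # no morphology processing for point models

node_types.to_csv('network/V1_node_types.csv', sep=' ', index=False)
```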
We must also adjust the *edges_types.csv* files:
```
pd.read_csv('sources/chapter05/converted_network/V1_V1_edge_types.csv', sep=' ')
```
* **model_template** has been changed to use a NEST based model type (static_synapse)
* Use different **dynamics_parameter** files
* It's important to readjust **syn_weight**, as values appropriate for NEURON-based models are often wrong for NEST-based models.
Notice we don't have to change any of the hdf5 files. The network topology remains the same making it a powerful tool for comparing networks of different levels of resolution.
### Building a model from scratch.
We can use the BMTK Network Builder to create new network files just for point-based modeling
#### V1 Network
First lets build a "V1" network of 300 cells, split into 4 different populations
```
from bmtk.builder.networks import NetworkBuilder
from bmtk.builder.auxi.node_params import positions_columinar
net = NetworkBuilder("V1")
net.add_nodes(N=80, # Create a population of 80 neurons
positions=positions_columinar(N=80, center=[0, 50.0, 0], max_radius=30.0, height=100.0),
pop_name='Scnn1a', location='VisL4', ei='e', # optional parameters
model_type='point_process', # Tells the simulator to use point-based neurons
model_template='nest:iaf_psc_alpha', # tells the simulator to use NEST iaf_psc_alpha models
dynamics_params='472363762_point.json' # File containing iaf_psc_alpha mdoel parameters
)
net.add_nodes(N=20, pop_name='PV', location='VisL4', ei='i',
positions=positions_columinar(N=20, center=[0, 50.0, 0], max_radius=30.0, height=100.0),
model_type='point_process',
model_template='nest:iaf_psc_alpha',
dynamics_params='472912177_point.json')
net.add_nodes(N=200, pop_name='LIF_exc', location='L4', ei='e',
positions=positions_columinar(N=200, center=[0, 50.0, 0], min_radius=30.0, max_radius=60.0, height=100.0),
model_type='point_process',
model_template='nest:iaf_psc_alpha',
dynamics_params='IntFire1_exc_point.json')
net.add_nodes(N=100, pop_name='LIF_inh', location='L4', ei='i',
positions=positions_columinar(N=100, center=[0, 50.0, 0], min_radius=30.0, max_radius=60.0, height=100.0),
model_type='point_process',
model_template='nest:iaf_psc_alpha',
dynamics_params='IntFire1_inh_point.json')
```
We can now go ahead and create the synaptic connections, then build and save our network.
```
from bmtk.builder.auxi.edge_connectors import distance_connector
## E-to-E connections
net.add_edges(source={'ei': 'e'}, target={'pop_name': 'Scnn1a'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.34, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=5.0,
delay=2.0,
dynamics_params='ExcToExc.json',
model_template='static_synapse')
net.add_edges(source={'ei': 'e'}, target={'pop_name': 'LIF_exc'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.34, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=-1.0,
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='static_synapse')
### Generating I-to-I connections
net.add_edges(source={'ei': 'i'}, target={'pop_name': 'PV'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=-1.0,
delay=2.0,
dynamics_params='InhToInh.json',
model_template='static_synapse')
net.add_edges(source={'ei': 'i'}, target={'ei': 'i', 'pop_name': 'LIF_inh'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=10.0,
delay=2.0,
dynamics_params='instanteneousInh.json',
model_template='static_synapse')
### Generating I-to-E connections
net.add_edges(source={'ei': 'i'}, target={'ei': 'e', 'pop_name': 'Scnn1a'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=-15.0,
delay=2.0,
dynamics_params='InhToExc.json',
model_template='static_synapse')
net.add_edges(source={'ei': 'i'}, target={'ei': 'e', 'pop_name': 'LIF_exc'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 1.0, 'd_max': 160.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=-15.0,
delay=2.0,
dynamics_params='instanteneousInh.json',
model_template='static_synapse')
### Generating E-to-I connections
net.add_edges(source={'ei': 'e'}, target={'pop_name': 'PV'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.26, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=15.0,
delay=2.0,
dynamics_params='ExcToInh.json',
model_template='static_synapse')
net.add_edges(source={'ei': 'e'}, target={'pop_name': 'LIF_inh'},
connection_rule=distance_connector,
connection_params={'d_weight_min': 0.0, 'd_weight_max': 0.26, 'd_max': 300.0, 'nsyn_min': 3, 'nsyn_max': 7},
syn_weight=5.0,
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='static_synapse')
net.build()
net.save_nodes(output_dir='sim_ch05/network')
net.save_edges(output_dir='sim_ch05/network')
```
### Building external network
Next we want to create an external network of "virtual cells" with spike-trains that will synapse onto our V1 cells and drive activity. We will call this external network "LGN"; it contains 500 excitatory cells.
```
lgn = NetworkBuilder('LGN')
lgn.add_nodes(N=500, pop_name='tON', potential='exc', model_type='virtual')
```
We will use a special function for setting the number of synapses between the LGN --> V1 cells. The select_source_cells function will be called during the build process.
```
import numpy as np
def select_source_cells(sources, target, nsources_min=10, nsources_max=30, nsyns_min=3, nsyns_max=12):
total_sources = len(sources)
nsources = np.random.randint(nsources_min, nsources_max)
selected_sources = np.random.choice(total_sources, nsources, replace=False)
syns = np.zeros(total_sources)
syns[selected_sources] = np.random.randint(nsyns_min, nsyns_max, size=nsources)
return syns
lgn.add_edges(source=lgn.nodes(), target=net.nodes(pop_name='Scnn1a'),
iterator='all_to_one',
connection_rule=select_source_cells,
connection_params={'nsources_min': 10, 'nsources_max': 25},
syn_weight=20.0,
delay=2.0,
dynamics_params='ExcToExc.json',
model_template='static_synapse')
lgn.add_edges(source=lgn.nodes(), target=net.nodes(pop_name='PV'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 15, 'nsources_max': 30},
iterator='all_to_one',
syn_weight=20.0,
delay=2.0,
dynamics_params='ExcToInh.json',
model_template='static_synapse')
lgn.add_edges(source=lgn.nodes(), target=net.nodes(pop_name='LIF_exc'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 10, 'nsources_max': 25},
iterator='all_to_one',
syn_weight= 10.0,
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='static_synapse')
lgn.add_edges(source=lgn.nodes(), target=net.nodes(pop_name='LIF_inh'),
connection_rule=select_source_cells,
connection_params={'nsources_min': 15, 'nsources_max': 30},
iterator='all_to_one',
syn_weight=10.0,
delay=2.0,
dynamics_params='instanteneousExc.json',
model_template='static_synapse')
```
Finally we build and save our lgn network.
```
lgn.build()
lgn.save_nodes(output_dir='sim_ch05/network')
lgn.save_edges(output_dir='sim_ch05/network')
```
## 2. Setting up PointNet Environment
#### Directory Structure
Before running a simulation, we need to create the runtime environment, including parameter files, the run script and configuration files. If you are using the tutorial, these files will already be in place. Otherwise we can create them from the command line:
```bash
$ python -m bmtk.utils.sim_setup \
--network sim_ch05/network/ \
--report-vars V_m \
--include-examples \
--report-nodes 0,80,100,300 \
--tstop 3000.0 \
pointnet sim_ch05/
```
or
```
from bmtk.utils.sim_setup import build_env_pointnet
build_env_pointnet(base_dir='sim_ch05',
                   network_dir='sim_ch05/network',
                   tstop=3000.0, dt=0.01,
                   report_vars=['V_m'],             # Record membrane potential (default soma)
                   report_nodes=[0, 80, 100, 300],  # Select nodes to record from
                   include_examples=True,           # Copies components files
                   )
```
The network file locations are recorded in **circuit_config.json** and the simulation parameters are set in **simulation_config.json**. The simulation time is set to run for 3000.0 ms (tstop). We also specify a membrane report to record the V_m property of 4 cells (gids 0, 80, 100, 300 - one from each cell type). In general, all the parameters needed to set up and start a simulation are found in the config files, and adjusting network/simulation conditions can be done by editing these json files in a text editor.
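As a quick check (a sketch of our own, assuming the standard SONATA layout that bmtk generates), the relevant blocks of the generated config can be inspected directly:
```
import json

# inspect the generated configuration (paths as created by the setup step above)
with open('sim_ch05/simulation_config.json') as f:
    sim_cfg = json.load(f)

print(sim_cfg['run'])      # tstop, dt and other run-time parameters
print(sim_cfg['reports'])  # the membrane-potential report defined above
```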
#### lgn input
We need to provide our LGN external network cells with spike-trains so they can activate our recurrent network. Previously we showed how to do this by generating csv files. We can also use NWB files, which are a common format for saving electrophysiological data in neuroscience.
We can use any NWB file generated experimentally or computationally, but for this example we will use a preexisting one. First, download the file:
```bash
$ cd sim_ch05
$ wget https://github.com/AllenInstitute/bmtk/raw/develop/docs/examples/spikes_inputs/lgn_spikes.nwb
```
Then we must edit the simulation_config.json file to tell the simulator where to find the nwb file and which network to associate it with.
```json
"inputs": {
"LGN_spikes": {
"input_type": "spikes",
"module": "nwb",
"input_file": "$BASE_DIR/lgn_spikes.nwb",
"node_set": "LGN",
"trial": "trial_0"
}
},
```
## 3. Running the simulation
The call to sim_setup created a file run_pointnet.py, which we can run directly from the command line:
```bash
$ python run_pointnet.py config.json
```
or if you have mpi setup:
```bash
$ mpirun -np $NCORES python run_pointnet.py config.json
```
Or we can run it directly from Python:
```
from bmtk.simulator import pointnet
configure = pointnet.Config.from_json('sim_ch05/simulation_config.json')
configure.build_env()
network = pointnet.PointNetwork.from_config(configure)
sim = pointnet.PointSimulator.from_config(configure, network)
sim.run()
```
## 4. Analyzing results
Results of the simulation, as specified in the config, are saved into the output directory. Using the analyzer functions, we can, for example, plot the spike raster:
```
from bmtk.analyzer.spike_trains import plot_raster, plot_rates
plot_raster(config_file='sim_ch05/simulation_config.json')
```
Or we can plot the rates of the different populations
```
plot_rates(config_file='sim_ch05/simulation_config.json')
```
In the reports section of our simulation_config.json, we can see that we also record V_m (i.e. the membrane potential) of a select sample of cells. By default these reports are written to an hdf5 file with the same name as the report (membrane_potential.h5), and we can use the analyzer to show the time course for some of these cells.
```
from bmtk.analyzer.cell_vars import plot_report
plot_report(config_file='sim_ch05/simulation_config.json', node_ids=[0, 80])
```
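The report file can also be inspected directly with h5py (a generic sketch of our own; the path assumes the default sim_ch05/output directory, and the exact group layout depends on the bmtk/SONATA version):
```
import h5py

# list every group/dataset in the membrane report written by the simulation
with h5py.File('sim_ch05/output/membrane_potential.h5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))
```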
# Research Project #
## Introduction ##
This data set relates to the academic performance of students in grades 1-12. It was collected in 2016, over the course of two academic semesters, using a "learner activity tracking" tool (xADI) from a learning management system (LMS). Data was collected for 480 students on 16 unique variables. We are interested in the different types of classroom interactions and how they may relate to other variables such as gender, parental satisfaction, and final grades.
We will explore how the number of classroom interactions affects final grades, whether the sum of different interactions is an accurate representation of participation, whether interaction level differs by gender, whether parental satisfaction correlates with total interaction, and the potential relationship between interaction and days absent from school.
## Data Set ##
The complete data set of 16 variables was cleaned through Python scripts, resulting in the 12 columns we will be using for our analysis.
Gender: Classifies the students as either Male or Female. The gender column is useful because in our analysis we investigate whether gender is correlated with participation levels and therefore also with class achievement.
PlaceofBirth: Although this column does not directly pertain to our research questions, it might be relevant for future research, such as whether certain countries have higher class achievement than others.
GradeID: Represents the grade of the student. This column is also not relevant to our specific research questions but might be insightful for future ones, such as whether the average level of total interaction varies between grades.
Topic: Describes the course subject. It is not significant to our analysis but could be suitable for future analysis, for example whether certain subjects lead to higher interaction levels from students.
RaisedHands: Represents the number of times a student raised their hand in class; this is one of the four columns that make up our "Total Interaction" column.
VisitedResources: Represents the number of times a student visited resources in class; this is one of the four columns that make up our "Total Interaction" column.
AnnouncementsView: Shows the number of times a student viewed announcements for a class; this is one of the four columns that make up our "Total Interaction" column.
Discussion: Shows the number of times a student participated in discussion in class; this is one of the four columns that make up our "Total Interaction" column.
ParentschoolSatisfaction: Describes the parent's rating of the school (either good or bad). This column is used in our analysis when observing whether parents' ratings are correlated with the level of interaction of their students.
StudentAbsenceDays: Classifies how often each student was absent, with the options being "under 7" and "above 7".
Class: This column is central to our analysis as it describes the class achievement of each student. The three options are H for high achievement, M for medium and L for low.
Total Interaction: This is a column not originally found in the dataset, made by adding the "RaisedHands", "VisitedResources", "AnnouncementsView" and "Discussion" columns. As most of our research questions in the individual and group portions of this data analysis involve students' levels of interaction, this is a very important column for our analysis.
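For clarity, the sketch below shows how such a column can be derived; our actual preprocessing lives in project_functions_anamica.load_process_data, and the raw file's column capitalization may differ slightly from the names used here.
```
import pandas as pd

# illustrative reconstruction of the TotalInteraction column from the four raw interaction counts
raw = pd.read_csv("../../data/data_raw/part_data.csv")
interaction_cols = ["RaisedHands", "VisitedResources", "AnnouncementsView", "Discussion"]
raw["TotalInteraction"] = raw[interaction_cols].sum(axis=1)
raw[interaction_cols + ["TotalInteraction"]].head()
```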
## Analysis ##
Research question 1 - how the number of classroom interactions relates to final grades - will be answered by looking at a box plot. A box plot shows the distribution of the total interaction variable by final grade (low, medium, or high). If the total number of classroom interactions is related to final grade, we will see distributions of interactions that are quite different for each grade category - signifying that lower or higher interaction levels result in a certain final grade. The less overlap between final grade distributions, the better.
Research question 2 - asking whether the sum of total interactions is an accurate representation of classroom participation - will be answered by looking at correlations. By seeing if each individual type of participation is correlated with the sum of interactions, we can confirm that the sum is an accurate measurement for our other analyses. This would suggest that students who are high on one interaction type are also high in the total (and that students don't just participate in one manner).
Research question 3 - the question of whether gender has an influence on total interaction - will be answered through a box plot. The box plot shows the distribution of the total interaction variable between the genders (male and female). With the corresponding box plot visualization, one can make reasonable inferences about the average classroom achievement of the different genders, as it was previously found that higher interaction levels lead to higher achievement.
Research question 4 - finding whether parents' satisfaction with the school is correlated with the total interaction levels of their students - will be answered through a catplot. The catplot enables observers to see whether there is a concentration of good or bad ratings along the scale of total interaction. As level of achievement follows total interaction, the catplot provides insight into whether participating students' parents have a positive or negative outlook on their academics (and evidently their school).
```
import sys, os
sys.path.insert(0, os.path.abspath('..'))
from scripts.project_functions import *
from scripts import project_functions_anamica
import pandas as pd
import seaborn as sns
import matplotlib.pylab as plt
df = project_functions_anamica.load_process_data("../../data/data_raw/part_data.csv")
df
```
### Research Question 1: ###
*How does final grade in the class relate to the total number of classroom interactions?*
In the box plot visualization below, you can see that those with the highest final grades had the highest average total classroom interactions. Those with the lowest final grades had the fewest total classroom interactions on average. The average number of interactions for each final grade level appears significantly different from the others - especially for the low (L) grade category, whose middle 50% of scores does not overlap with the middle 50% of the middle final grade category.
Those with higher classroom interactions typically have a higher final grade designation.
```
sns.boxplot(x='Class', y='TotalInteraction', order = ['L', 'M', 'H'], data = df)
plt.title("Total Classroom Interactions by Final Grade")
plt.xlabel('Final Grade')
plt.ylabel('Total Number of Classroom Interactions')
```
### Research Question 2: ###
*Does the sum of all classroom interactions accurately represent their engagement? (ie: does total interactions correlate with each category of interaction)*
The default correlation for Seaborn's heatmap is Pearson's correlation, as it is the standard in statistics when looking for relationships between two continuous variables. I used the accepted structure for categorizing the extent of the correlations: low is 0 to +/- .29; medium is +/- .30 to +/- .49; and high is +/- .50 to +/- 1. A score of +/- 1 means that variables are perfectly correlated. A value of r = 0 indicates that variables are not correlated at all.
Looking at the correlation matrix below, the sum of total interactions within the classroom is highly correlated with each of the singular interaction variables (r > .83). This is easily visualized as the lighter the color of a square, the higher the correlation (as also indicated by the correlation number). All are positively correlated, meaning an increase in one interaction type is associated with an increase in another type of classroom interaction.
It is pertinent to note that discussion participation is not as highly correlated with the total interaction score. In general, the number of times a student participated in class discussion has a lower correlation with the other variables.
Note: The diagonal is a "perfect" correlation as each variable is simply correlated with itself.
```
cor = df.corr()
heatmap = sns.heatmap(cor, xticklabels=cor.columns, yticklabels=cor.columns, annot=True,
cmap='mako')
corplot = heatmap.get_figure()
corplot.savefig("corplot.png")
```
### Research Question 3: ###
*Does Gender have an influence on the Total Interaction?*
When discussing gender vs. total interaction, the box plot visualization makes it clear that there is a large disparity in the amount of interaction between males and females.
On average, females have a higher overall participation rate, and given our previous findings, one can infer that females tend to be higher achievers.
However, it is important to note that this dataset has data from 1.74 times as many males as females, so that may be an influential factor to consider.
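That imbalance is easy to verify directly from the processed data (a quick check of our own, using the Gender column shown in the plot below):
```
counts = df['Gender'].value_counts()
print(counts)
print("male-to-female ratio: %.2f" % (counts.max() / counts.min()))
```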
```
boxplot = sns.boxplot(x='Gender', y='TotalInteraction', data = df)
plt.ylabel("Total Interaction Levels")
```
### Research Question 4: ###
*Does the Parent's satisfaction level with the school correlate with the total interaction of their students?*
The dataset suggests that parents whose students had higher levels of participation had a higher satisfaction rate with the school. The catplot demonstrates that a majority of parents who gave a positive rating of their school had high-achieving students, and vice versa.
```
dataPSS = df.groupby('ParentschoolSatisfaction', as_index = False)['TotalInteraction'].mean()
dataPSS
catplot = sns.catplot(data = df, y='TotalInteraction', x='ParentschoolSatisfaction')
plt.ylabel("Total Interaction Levels")
plt.xlabel("Parent's School Satisfaction Rating")
catplot
```
# Conclusions #
We explored a number of different relationships that exist within this data set pertaining to classroom interactions. We were able to use a variety of visualizations to see that as the total number of classroom interactions increases, final grade categorization gets higher; (RQ3 results); and (RQ4 results). We were also able to confirm that the total number of classroom interactions is a valid way to indicate classroom participation and used it as our main variable in our other analyses.
These findings are important for teachers, parents, and students. Teachers can encourage interactions, which may result in higher grades (note: causation is yet to be established). We also know that boys typically have a lower number of classroom interactions and perhaps need more encouragement.
```
import torch
torch.backends.cudnn.benchmark = True
from torch import nn
from torch.nn.functional import softmax, log_softmax
from torchmetrics import Accuracy
from resnet_cifar import resnet32
import pytorch_lightning as pl
import wandb
import torchvision.transforms as T
import sys
sys.path.append('../')
from datasets.cifar100_datamodule import DataModule
from deepblocks.layer import MultiHeadAttention
```
## Training with the network-based strategy
```
class Attention(nn.Module):
    """Self-attention over the peer dimension, used to re-weight the peers' soft predictions."""
    def __init__(self, input_dim):
        super().__init__()
        self.linear = nn.Linear(input_dim, input_dim, bias=False)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        xa = self.linear(x)               # project each peer's distribution
        b = xa @ x.transpose(-1, -2)      # pairwise similarity scores between peers
        c = self.softmax(b)               # attention weights
        y = c @ x                         # attention-weighted combination of the peers
        return y

# quick sanity check: with non-negative inputs the attention output stays non-negative
for _ in range(1000):
    att = Attention(10)
    x = torch.rand(100, 51, 10)
    assert att(x).min() >= 0

wandb.finish()

def kl_div(x, y):
    # element-wise KL divergence term, averaged over batch, peers and classes
    return (x * (x / y).log()).mean()
class LitModel(pl.LightningModule):
def __init__(self, ):
super().__init__()
self.student1 = resnet32()
self.student2 = resnet32()
self.student3 = resnet32()
self.leader = resnet32()
self.mha = Attention(input_dim=100)
self.T = 3.0
self.celoss = nn.CrossEntropyLoss()
self.acc = Accuracy(compute_on_step=True, top_k=1)
def configure_optimizers(self):
opt = torch.optim.SGD(self.parameters(), lr=1e-2, momentum=.9, nesterov=True, weight_decay=5e-4)
step = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[150, 255], gamma=.1)
return [opt], [step]
def forward(self, x, optimize_first:bool=True):
x1 = self.student1(x)
x2 = self.student2(x)
x3 = self.student3(x)
xl = self.leader(x)
return x1, x2, x3, xl
def training_step(self, batch, batch_id):
x, y = batch
xs = self(x)
# GT loss
loss = [self.celoss(_x, y) for _x in xs]
loss = torch.stack(loss, dim=0).sum()
# peers loss
t1, t2, t3, tl = [softmax(_x/self.T, dim=1) for _x in xs]
peers = torch.stack((t1, t2, t3), dim=1)
mha_peers = self.mha(peers)
loss += self.T * kl_div(mha_peers, peers)
# leader loss
mean = peers.mean(dim=1)
loss += self.T * kl_div(mean, tl)
assert loss.item() == loss.item()  # guard against NaN loss
# logging
self.log('train_loss', loss, prog_bar=True)
self.log('train_acc', self.acc(tl, y), prog_bar=True)
return loss
def validation_step(self, batch, batch_id):
x, y = batch
xs = self(x)
# GT loss
loss = [self.celoss(_x, y) for _x in xs]
loss = sum(loss)
# peers loss
t1, t2, t3, tl = [softmax(_x, dim=1) for _x in xs]
# peers = torch.stack((t1, t2, t3), dim=1)
# mha_peers = self.mha(peers)
# loss += self.T * kl_div(mha_peers, peers)
# leader loss
# mean = peers.mean(dim=1)
# loss += self.T * kl_div(mean, tl)
# logging
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', self.acc(tl, y), prog_bar=True)
return loss
def test_step(self, batch, *a):
x, y = batch
xs = self(x)
# GT loss
loss = [self.celoss(_x, y) for _x in xs]
loss = sum(loss)
# peers loss
t1, t2, t3, tl = [softmax(_x, dim=1) for _x in xs]
# peers = torch.stack((t1, t2, t3), dim=1)
# mha_peers = self.mha(peers)
# loss += self.T * kl_div(mha_peers, peers)
# leader loss
# mean = peers.mean(dim=1)
# loss += self.T * kl_div(mean, tl)
# logging
self.log('test_loss', loss, prog_bar=True)
self.log('test_acc', self.acc(tl, y), prog_bar=True)
return loss
train_transforms = T.Compose([
T.RandomCrop(32, padding=4),
T.RandomHorizontalFlip(), # randomly flip image horizontally
T.ToTensor(),
T.Normalize((0.5071, 0.4865, 0.4409), (0.2673, 0.2564, 0.2762))
])
test_transforms = T.Compose([
T.ToTensor(),
T.Normalize((0.5071, 0.4865, 0.4409), (0.2673, 0.2564, 0.2762))
])
wandb.finish()
lr_monitor = pl.callbacks.LearningRateMonitor(logging_interval='epoch')
logger = pl.loggers.wandb.WandbLogger(project='distilled models', entity='blurry-mood')
trainer = pl.Trainer(callbacks=[lr_monitor], logger=logger,
gpus=-1, max_epochs=300,
val_check_interval=1., progress_bar_refresh_rate=0)
dm = DataModule('../datasets/cifar-100-python/', train_transform=train_transforms, test_transform=test_transforms,
batch_size=128)
litmodel = LitModel()
trainer.fit(litmodel, dm)
trainer.test(litmodel)
torch.save(litmodel.leader.state_dict(), '../models/okddip_resnet32.pth')
```
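To reuse the distilled leader later, a minimal sketch (assuming the checkpoint path saved above) is to load the state dict back into a fresh resnet32:
```
import torch
from resnet_cifar import resnet32

# rebuild the architecture and load the distilled leader weights saved above
leader = resnet32()
leader.load_state_dict(torch.load('../models/okddip_resnet32.pth', map_location='cpu'))
leader.eval()

# sanity check on a random CIFAR-sized batch (100 classes for the CIFAR-100 setup used here)
with torch.no_grad():
    logits = leader(torch.randn(4, 3, 32, 32))
print(logits.shape)
```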
# IAPWS-IF97 Libraries
## 1 Introduction to IAPWS-IF97
http://www.iapws.org/relguide/IF97-Rev.html
This formulation is recommended for industrial use (primarily the steam power industry) for the calculation of thermodynamic properties of ordinary water in its fluid phases, including vapor-liquid equilibrium.
The release also contains "backward" equations to allow calculations with certain common sets of independent variables to be made without iteration; these equations may also be used to provide good initial guesses for iterative solutions.
Since the release was first issued, it has been supplemented by several additional "backward" equations that are available for use if desired; these are for p(h,s) in Regions 1 and 2, T(p,h), v(p,h), T(p,s), v(p,s) in Region 3, p(h,s) in Region 3 with auxiliary equations for independent variables h and s, and v(p,T) in Region 3.

## 2 Python library of IAPWS
### 2.1 IAPWS
https://github.com/jjgomera/iapws
**Dependencies:** NumPy, SciPy: libraries with mathematical and scientific tools
```bash
python -m pip install iapws
```
```
from iapws import IAPWS97
sat_steam=IAPWS97(P=1,x=1) # saturated steam with known P,x=1
sat_liquid=IAPWS97(T=370, x=0) #saturated liquid with known T,x=0
steam=IAPWS97(P=2.5, T=500) # steam with known P and T(K)
print(sat_steam.h, sat_liquid.h, steam.h) #calculated enthalpies
```
### 2.2 SEUIF97
https://github.com/PySEE/SEUIF97
The high-speed shared library is provided for developers who need to calculate the properties of water and steam where a direct IAPWS-IF97 implementation may be unsuitable because of its computation time, such as Computational Fluid Dynamics (CFD), heat cycle calculations, simulations of non-stationary processes, and real-time process optimizations.
Through the high-speed library, the IAPWS-IF97 results are produced accurately at roughly three times the computational speed of the repeated-squaring method for fast computation of large positive integer powers; a rough timing sketch is given below.
The library is written in ANSI C for faster, smaller binaries and better compatibility for accessing the DLL/SO from different C++ compilers.
For Windows and Linux users, the convenient binary library and APIs are provided.
* The shared library: Windows(32/64): `libseuif97.dll`; Linux(64): `libseuif97.so`
* The binding API: Python, C/C++, Microsoft Excel VBA, MATLAB, Java, Fortran, C#
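As a rough, machine-dependent illustration (a sketch of our own, not a rigorous benchmark and not taken from the SEUIF97 docs), the two Python libraries can be timed on the same superheated-steam state:
```
import time
from iapws import IAPWS97
import seuif97

p, t = 16.10, 535.10   # MPa, °C (superheated steam)

start = time.perf_counter()
for _ in range(1000):
    h = seuif97.pt2h(p, t)
t_seuif97 = time.perf_counter() - start

start = time.perf_counter()
for _ in range(1000):
    h = IAPWS97(P=p, T=t + 273.15).h   # iapws takes temperature in K
t_iapws = time.perf_counter() - start

print('seuif97: %.3f s   iapws: %.3f s' % (t_seuif97, t_iapws))
```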
#### 2.2.1 API:seuif97.py
Functions for `water and steam properties`, `exergy analysis` and the `thermodynamic process of steam turbines` are provided in **SEUIF97**.
##### 2.2.1.1 Functions of water and steam properties
Using SEUIF97, you can set the state of steam using various pairs of known properties to obtain any of the `30 properties in libseuif97` you wish to know.
The following input pairs are implemented:
```c
(p,t) (p,h) (p,s) (p,v)
(t,h) (t,s) (t,v)
(h,s)
(p,x) (t,x)
```
Two types of functions are provided in the seuif97 package:
* ??2?(in1,in2), e.g: ```h=pt2h(p,t)```
* ??(in1,in2,propertyID), e.g: ```h=pt(p,t,4)```, where the propertyID of h is 4
Python API:seuif97.py
```python
from ctypes import *
flib = windll.LoadLibrary('libseuif97.dll')
prototype = WINFUNCTYPE(c_double, c_double, c_double, c_int)
# ---(p,t) ----------------
def pt(p, t, pid):
f = prototype(("seupt", flib),)
result = f(p, t, pid)
return result
def pt2h(p, t):
f = prototype(("seupt", flib),)
result = f(p, t, 4)
return result
```
```
import seuif97
p, t = 16.10, 535.10
# ??2?(in1,in2)
h = seuif97.pt2h(p, t)
s = seuif97.pt2s(p, t)
v = seuif97.pt2v(p, t)
print("(p,t),h,s,v:",
"{:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, t, h, s, v))
# ??(in1,in2,propertyid)
t = seuif97.ph(p, h, 1)
s = seuif97.ph(p, h, 5)
v = seuif97.ph(p, h, 3)
print("(p,h),t,s,v:",
"{:>.2f}\t {:>.2f}\t {:>.2f}\t {:>.3f}\t {:>.4f}".format(p, h, t, s, v))
```
##### 2.2.1.2 Functions of Thermodynamic Process of Steam Turbine
* 1 Isentropic Enthalpy Drop:ishd(pi,ti,pe)
pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C)
pe - double, outlet pressure(MPa)
* 2 Isentropic Efficiency(`0~100`): ief(pi,ti,pe,te)
pi - double, inlet pressure(MPa); ti - double, inlet temperature(°C)
pe - double, outlet pressure(MPa); te - double, outlet temperature(°C)
```
from seuif97 import *
p1=16.1
t1=535.2
p2=3.56
t2=315.1
hdis=ishd(p1,t1,p2) # Isentropic Enthalpy Drop
ef=ief(p1,t1,p2,t2) # Isentropic Efficiency:0-100
print('Isentropic Enthalpy Drop =',hdis,'kJ/kg')
print('Isentropic Efficiency = %.2f%%'%ef)
```
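For a quick downstream use of these results (an illustrative sketch of our own, with an assumed mass flow rate), the actual specific enthalpy drop gives the turbine's internal power:
```
from seuif97 import pt2h

p1, t1 = 16.1, 535.2    # inlet: MPa, °C
p2, t2 = 3.56, 315.1    # outlet: MPa, °C
mdot = 100.0            # assumed mass flow rate in kg/s (illustrative value)

h1 = pt2h(p1, t1)       # inlet specific enthalpy, kJ/kg
h2 = pt2h(p2, t2)       # outlet specific enthalpy, kJ/kg

power = mdot * (h1 - h2)    # kJ/kg * kg/s = kW
print('Actual enthalpy drop = %.2f kJ/kg' % (h1 - h2))
print('Internal power = %.2f kW' % power)
```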
#### 2.2.2 Property and Process Diagrams
**1 T-s Diagram**
```
%matplotlib inline
"""
T-s Diagram
1 isoenthalpic lines isoh(200, 3600)kJ/kg
2 isobar lines isop(611.657e-6,100)MPa
3 saturation lines x=0,x=1
4 isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h, ph2t, ph2s, tx2s
import numpy as np
import matplotlib.pyplot as plt
Pt=611.657e-6
Tc=647.096
xAxis = "s"
yAxis = "T"
title = {"T": "T, ºC", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 11.5)
plt.ylim(0, 800)
plt.grid()
isoh = np.linspace(200, 3600, 18)
isop = np.array([Pt,0.001,0.002,0.004,0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0,
2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
for h in isoh:
T = np.array([ph2t(p, h) for p in isop])
S = np.array([ph2s(p, h) for p in isop])
plt.plot(S, T, 'b', lw=0.5)
for p in isop:
T = np.array([ph2t(p, h) for h in isoh])
S = np.array([ph2s(p, h) for h in isoh])
plt.plot(S, T, 'b', lw=0.5)
tc = Tc - 273.15
T = np.linspace(0.01, tc, 100)
for x in np.array([0, 1.0]):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r', lw=1.0)
for x in np.linspace(0.1, 0.9, 11):
S = np.array([tx2s(t, x) for t in T])
plt.plot(S, T, 'r--', lw=0.5)
plt.show()
```
**2 H-S Diagram**
```
%matplotlib inline
"""
h-s Diagram
1 Calculating Isotherm lines isot(0.0,800)ºC
2 Calculating Isobar lines isop(611.657e-6, 100)Mpa
3 Calculating saturation lines x=0,x=1
4 Calculating isoquality lines x(0.1,0.9)
"""
from seuif97 import pt2h,pt2s,tx2s,tx2h
import numpy as np
import matplotlib.pyplot as plt
xAxis = "s"
yAxis = "h"
title = { "h": "h, kJ/kg", "s": "s, kJ/kgK"}
plt.title("%s-%s Diagram" % (yAxis, xAxis))
plt.xlabel(title[xAxis])
plt.ylabel(title[yAxis])
plt.xlim(0, 12.2)
plt.ylim(0, 4300)
plt.grid()
Pt=611.657e-6
isot = np.array([0, 50, 100, 200, 300, 400, 500, 600, 700, 800])
isop = np.array([Pt,0.001, 0.01, 0.1, 1, 10, 20, 50, 100])
# Isotherm lines in ºC
for t in isot:
h = np.array([pt2h(p,t) for p in isop])
s = np.array([pt2s(p,t) for p in isop])
plt.plot(s,h,'g',lw=0.5)
# Isobar lines in Mpa
for p in isop:
h = np.array([pt2h(p,t) for t in isot])
s = np.array([pt2s(p,t) for t in isot])
plt.plot(s,h,'b',lw=0.5)
tc=647.096-273.15
T = np.linspace(0.1,tc,100)
# saturation lines
for x in np.array([0,1.0]):
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r',lw=1.0)
# Isoquality lines
isox=np.linspace(0.1,0.9,11)
for x in isox:
h = np.array([tx2h(t,x) for t in T])
s = np.array([tx2s(t,x) for t in T])
plt.plot(s,h,'r--',lw=0.5)
plt.show()
```
**4 H-S(Mollier) Diagram of Steam Turbine Expansion**
```
%matplotlib inline
"""
H-S(Mollier) Diagram of Steam Turbine Expansion
4 lines:
1 Isobar line:p inlet
2 Isobar line:p outlet
3 isentropic line: (p inlet ,t inlet h inlet,s inlet), (p outlet,s inlet)
4 Expansion line: inlet,outlet
License: this code is in the public domain
Author: Cheng Maohua
Email: [email protected]
Last modified: 2018.11.28
"""
import matplotlib.pyplot as plt
import numpy as np
from seuif97 import pt2h, pt2s, ps2h, ph2t, ief, ishd
class Turbine(object):
def __init__(self, pin, tin, pex, tex):
self.pin = pin
self.tin = tin
self.pex = pex
self.tex = tex
def analysis(self):
self.ef = ief(self.pin, self.tin, self.pex, self.tex)
self.his = ishd(self.pin, self.tin, self.pex)
self.hin = pt2h(self.pin, self.tin)
self.sin = pt2s(self.pin, self.tin)
self.hex = pt2h(self.pex, self.tex)
self.sex = pt2s(self.pex, self.tex)
def expansionline(self):
sdelta = 0.01
# 1 Isobar pin
s_isopin = np.array([self.sin - sdelta, self.sin + sdelta])
h_isopin = np.array([ps2h(self.pin, s_isopin[0]),
ps2h(self.pin, s_isopin[1])])
# 2 Isobar pex
s_isopex = np.array([s_isopin[0], self.sex + sdelta])
h_isopex = np.array([ps2h(self.pex, s_isopex[0]),
ps2h(self.pex, s_isopex[1])])
# 3 isentropic lines
h_isos = np.array([self.hin, ps2h(self.pex, self.sin)])
s_isos = np.array([self.sin, self.sin])
# 4 expansion Line
h_expL = np.array([self.hin, self.hex])
s_expL = np.array([self.sin, self.sex])
# plot lines
plt.figure(figsize=(6, 8))
plt.title("H-S(Mollier) Diagram of Steam Turbine Expansion")
plt.plot(s_isopin, h_isopin, 'b-') # Isobar line: pin
plt.plot(s_isopex, h_isopex, 'b-') # Isobar line: pex
plt.plot(s_isos, h_isos, 'ys-') # isoentropic line:
plt.plot(s_expL, h_expL, 'r-', label='Expansion Line')
plt.plot(s_expL, h_expL, 'rs')
_title = 'The isentropic efficiency = ' + \
r'$\frac{h_1-h_2}{h_1-h_{2s}}$' + '=' + \
'{:.2f}'.format(self.ef) + '%'
plt.legend(loc="center", bbox_to_anchor=[
0.6, 0.9], ncol=2, shadow=True, title=_title)
# annotate the inlet and exlet
txt = "h1(%.2f,%.2f)" % (self.pin, self.tin)
plt.annotate(txt,
xy=(self.sin, self.hin), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
txt = "h2(%.2f,%.2f)" % (self.pex, self.tex)
plt.annotate(txt,
xy=(self.sex, self.hex), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
# annotate h2s
txt = "h2s(%.2f,%.2f)" % (self.pex, ph2t(self.pex, h_isos[1]))
plt.annotate(txt,
xy=(self.sin, h_isos[1]), xycoords='data',
xytext=(+1, +10), textcoords='offset points', fontsize=10,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.xlabel('s(kJ/(kg.K))')
plt.ylabel('h(kJ/kg)')
plt.grid()
plt.show()
def __str__(self):
result = ('\n Inlet(p, t) {:>6.2f}MPa {:>6.2f}°C \n Exlet(p, t) {:>6.2f}MPa {:>6.2f}°C \nThe isentropic efficiency: {:>5.2f}%'
.format(self.pin, self.tin, self.pex, self.tex, self.ef))
return result
if __name__ == '__main__':
pin, tin = 16.0, 535.0
pex, tex = 3.56, 315.0
tb1 = Turbine(pin, tin, pex, tex)
tb1.analysis()
print(tb1)
tb1.expansionline()
```
## Reference
* IAPWS.org: http://www.iapws.org/
* IF97-Rev.pdf: http://www.iapws.org/relguide/IF97-Rev.pdf
* python libray for IAPWS standard calculation of water and steam properties: https://github.com/jjgomera/iapws
* SEUIF97: https://github.com/PySEE/SEUIF97
* seuif97 Python Package https://pypi.org/project/seuif97/
```
import os
import sys
import collections
import csv
import pandas as pd
import numpy as np
import tensorflow as tf
import pandas as pd
import numpy as np
import time
from pymongo import MongoClient
import urllib
import multiprocess
import pickle
import random
# BERT files
os.listdir("../bert-master")
sys.path.insert(0, '../bert-master')
from run_classifier import *
import modeling
import optimization
import tokenization
import preprocessor as p
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
p.set_options(p.OPT.URL, p.OPT.EMOJI)
# Set up data directories
data_dir = './data'
model_dir = './model'
output_dir = './output'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
if not os.path.exists(model_dir):
os.makedirs(model_dir)
username = 'christian'
password = 'Dec211996'
client = MongoClient('mongodb://' + urllib.parse.quote_plus(username) + ':' + urllib.parse.quote_plus(password) + '@198.211.115.252')
def worker(consp_tuple):
## this is temp
from pymongo import MongoClient
import urllib
import pickle
username = 'christian'
password = 'Dec211996'
client = MongoClient('mongodb://' + urllib.parse.quote_plus(username) + ':' + urllib.parse.quote_plus(password) + '@198.211.115.252')
data_dir = './data'
## end temp
data = []
conspiracy, hashtag = consp_tuple
cursor = client[conspiracy][hashtag].find({})
for i, document in enumerate(cursor):
try:
inputs = {'text' : p.clean(document['text'])}
inputs.update({'tweetId' : document['tweetId']})
data.append(inputs)
except Exception as e:
pass
if (i > 100):
break
with open(data_dir + '/' + conspiracy + '-' + hashtag + '.p', 'wb') as f:
pickle.dump(data, f)
return consp_tuple
def get_consp_tuples():
username = 'christian'
password = 'Dec211996'
client = MongoClient('mongodb://' + urllib.parse.quote_plus(username) + ':' + urllib.parse.quote_plus(password) + '@198.211.115.252')
ignore_mask = ['test_db', 'admin', 'local', 'config', 'TwitterJobs']
conspiracies = list(set(client.list_database_names()) - set(ignore_mask))
consp_tuples = []
for conspiracy in conspiracies:
for hashtag in client[conspiracy].list_collection_names():
consp_tuples.append((conspiracy, hashtag))
return consp_tuples
def preprocess():
consp_tuples = get_consp_tuples()
random.shuffle(consp_tuples)
p = multiprocess.Pool(multiprocess.cpu_count())
# consp_tuples = [(consp_tuple, data_dir) for consp_tuple in consp_tuples]
# results = p.map(worker, consp_tuples)
for i in range(10):
worker(consp_tuples[i])
preprocess()
def generate_labels():
conspiracies = set()
for filename in os.listdir(data_dir):
if '.p' in filename:
conspiracies.add(filename.split('-')[0])
return {consp:i for i, consp in enumerate(conspiracies)}
data = []
dummytext = '*'
labels = generate_labels()
for filename in os.listdir(data_dir):
if '.p' in filename:
with open(data_dir + '/' + filename, 'rb') as f:
consp = filename.split('-')[0]
x = pickle.load(f)
for row in x:
data.append({'dummy_1': row['tweetId'], 'target': labels[consp], 'dummy_2' : dummytext, 'text' : row['text']})
df = pd.DataFrame(data)
data = None
# force train into cola format, test is fine as it is
# train = train.sample(frac=0.01)
# test = train.sample(frac=0.01)
test_split = 0.2
permuted_indices = list(np.random.permutation(len(df)))
test_indices = random.sample(permuted_indices, int(len(permuted_indices) * test_split))
train_indices = list(set(permuted_indices) - set(test_indices))
train = df.iloc[train_indices]
test = df.iloc[test_indices]
train.to_csv(data_dir + '/train.tsv', sep='\t', index=False, header=False)
test.to_csv(data_dir + '/test.tsv', sep='\t', index=False, header=True)
class MultiClassColaProcessor(ColaProcessor):
def get_labels(self):
"""See base class."""
return list([str(x) for x in labels.values()])
task_name = 'cola'
bert_config_file = model_dir + '/bert_config.json'
vocab_file = model_dir + '/vocab.txt'
init_checkpoint = model_dir + '/bert_model.ckpt'
do_lower_case = True
max_seq_length = 72
do_train = True
do_eval = False
do_predict = False
train_batch_size = 32
eval_batch_size = 32
predict_batch_size = 32
learning_rate = 2e-5
num_train_epochs = 1.0
warmup_proportion = 0.1
use_tpu = False
master = None
save_checkpoints_steps = 99999999 # <----- don't want to save any checkpoints
iterations_per_loop = 1000
num_tpu_cores = 8
tpu_cluster_resolver = None
start = time.time()
print("--------------------------------------------------------")
print("Starting training ...")
print("--------------------------------------------------------")
bert_config = modeling.BertConfig.from_json_file(bert_config_file)
processor = MultiClassColaProcessor()
label_list = processor.get_labels()
tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=do_lower_case)
tpu_cluster_resolver = None
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=master,
model_dir=output_dir,
save_checkpoints_steps=save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=iterations_per_loop,
num_shards=num_tpu_cores,
per_host_input_for_training=is_per_host))
train_examples = processor.get_train_examples(data_dir)
num_train_steps = int(len(train_examples) / train_batch_size * num_train_epochs)
num_warmup_steps = int(num_train_steps * warmup_proportion)
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
init_checkpoint=init_checkpoint,
learning_rate=learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=use_tpu,
use_one_hot_embeddings=use_tpu)
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=train_batch_size)
train_file = os.path.join(output_dir, "train.tf_record")
file_based_convert_examples_to_features(
train_examples, label_list, max_seq_length, tokenizer, train_file)
tf.logging.info("***** Running training *****")
tf.logging.info(" Num examples = %d", len(train_examples))
tf.logging.info(" Batch size = %d", train_batch_size)
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = file_based_input_fn_builder(
input_file=train_file,
seq_length=max_seq_length,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
end = time.time()
print("--------------------------------------------------------")
print("Training complete in ", end - start, " seconds")
print("--------------------------------------------------------")
def file_based_input_fn_builder(input_file, seq_length, is_training,
drop_remainder):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
name_to_features = {
"input_ids": tf.FixedLenFeature([seq_length], tf.int64),
"input_mask": tf.FixedLenFeature([seq_length], tf.int64),
"segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
"label_ids": tf.FixedLenFeature([], tf.int64),
"is_real_example": tf.FixedLenFeature([], tf.int64),
}
def _decode_record(record, name_to_features):
"""Decodes a record to a TensorFlow example."""
example = tf.parse_single_example(record, name_to_features)
# tf.Example only supports tf.int64, but the TPU only supports tf.int32.
# So cast all int64 to int32.
for name in list(example.keys()):
t = example[name]
if t.dtype == tf.int64:
t = tf.to_int32(t)
example[name] = t
return example
def input_fn(params):
"""The actual input function."""
#batch_size = params["batch_size"]
batch_size = 64 # <----- hardcoded batch_size added here
# For training, we want a lot of parallel reading and shuffling.
# For eval, we want no shuffling and parallel reading doesn't matter.
d = tf.data.TFRecordDataset(input_file)
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.apply(
tf.contrib.data.map_and_batch(
lambda record: _decode_record(record, name_to_features),
batch_size=batch_size,
drop_remainder=drop_remainder))
return d
return input_fn
start = time.time()
print("--------------------------------------------------------")
print("Starting inference ...")
print("--------------------------------------------------------")
predict_examples = processor.get_test_examples(data_dir)
num_actual_predict_examples = len(predict_examples)
predict_file = os.path.join(output_dir, "predict.tf_record")
file_based_convert_examples_to_features(predict_examples, label_list,
max_seq_length, tokenizer,
predict_file)
tf.logging.info("***** Running prediction*****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(predict_examples), num_actual_predict_examples,
len(predict_examples) - num_actual_predict_examples)
tf.logging.info(" Batch size = %d", predict_batch_size)
predict_drop_remainder = True if use_tpu else False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file,
seq_length=max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
output_predict_file = os.path.join(output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
num_written_lines = 0
tf.logging.info("***** Predict results *****")
for (i, prediction) in enumerate(result):
probabilities = prediction["probabilities"]
if i >= num_actual_predict_examples:
break
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
num_written_lines += 1
end = time.time()
print("--------------------------------------------------------")
print("Inference complete in ", end - start, " seconds")
print("--------------------------------------------------------")
labels
```
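To make the raw probabilities in test_results.tsv readable (an illustrative post-processing sketch, not part of the BERT scripts), each row can be mapped back to a conspiracy name via the labels dictionary built earlier:
```
import os
import pandas as pd

# each row of test_results.tsv holds one probability per class, in label-index order
probs = pd.read_csv(os.path.join(output_dir, 'test_results.tsv'), sep='\t', header=None)
index_to_conspiracy = {v: k for k, v in labels.items()}
predicted = [index_to_conspiracy[i] for i in probs.values.argmax(axis=1)]
print(predicted[:10])
```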
max_seq_length, tokenizer,
predict_file)
tf.logging.info("***** Running prediction*****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(predict_examples), num_actual_predict_examples,
len(predict_examples) - num_actual_predict_examples)
tf.logging.info(" Batch size = %d", predict_batch_size)
predict_drop_remainder = True if use_tpu else False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file,
seq_length=max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
output_predict_file = os.path.join(output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
num_written_lines = 0
tf.logging.info("***** Predict results *****")
for (i, prediction) in enumerate(result):
probabilities = prediction["probabilities"]
if i >= num_actual_predict_examples:
break
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
num_written_lines += 1
end = time.time()
print("--------------------------------------------------------")
print("Inference complete in ", end - start, " seconds")
print("--------------------------------------------------------")
labels
# Preparation
Load the libraries and read the paths from the environment variables (ENV).
```
# Import Libraries
import os
import pandas as pd
import joblib
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn import metrics
from sklearn import tree
import graphviz
import re
# Set paths
data_set_path = os.environ['DATASET_PATH']
metrics_path = os.environ['METRICS_PATH']
model_path = os.environ['MODEL_PATH']
```
# Data Extraction
Load the data from the CSV file.
```
# Load product data from CSV
product_data = pd.read_csv(data_set_path)
product_data
```
Check how many categories exist, in order to better understand the classification problem.
```
product_data.category.value_counts()
```
# Data Formatting
Remove the *product_id* column, because it is probably a random identifier that does not contain any relevant information.
```
product_data_prepared = product_data.drop(columns=['product_id'])
product_data_prepared.columns
```
Turn NaN values in the numeric columns into 0, and validate that it worked using the *weight* column as an example.
```
product_data_prepared['weight'].isna().value_counts()
for column in ['search_page','position','price','weight','express_delivery','minimum_quantity','view_counts','order_counts']:
product_data_prepared[column] = product_data_prepared[column].fillna(0)
product_data_prepared['weight'].isna().value_counts()
```
Convert the *creation_date* column into a Unix timestamp, because the classifier can only work with numeric fields.
```
product_data_prepared['creation_date'] = pd.to_datetime(product_data_prepared['creation_date'], format='%Y-%m-%d %H:%M:%S').astype(int) / 10**9
product_data_prepared['creation_date']
```
One-hot encode *seller_id* so that the classifier can better interpret the content of the field. Each seller is independent, so each seller needs its own column. There is no correlation between sellers and their ids — e.g. the seller with id 3000 is not twice as important as the seller with id 1500.
Since many sellers have only a few products listed, we focus only on the 100 sellers with the most listed products.
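As a quick illustration of why the ids are one-hot encoded instead of being used as plain numbers, here is a minimal sketch with two made-up seller ids (1500 and 3000 are only examples, not values taken from the dataset):
```
# Minimal sketch: each seller id becomes its own independent 0/1 column,
# so no ordering or magnitude is implied by the id values
from sklearn.preprocessing import OneHotEncoder
import pandas as pd

demo = pd.DataFrame({'seller_id': [1500, 3000, 1500]})
demo_enc = OneHotEncoder()
pd.DataFrame(demo_enc.fit_transform(demo).toarray(),
             columns=['seller_id_1500', 'seller_id_3000'])
```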
```
top_100_sellers = sorted([x for x in product_data.seller_id.value_counts().head(100).index])
# Build data frame with one-hot encoded seller_id
enc = OneHotEncoder(categories=[top_100_sellers],handle_unknown='ignore')
enc_df = pd.DataFrame(enc.fit_transform(product_data_prepared[['seller_id']]).toarray())
enc_df.columns = ['seller_id_'+str(col) for col in top_100_sellers]
# Replace seller_id column with one-hot-encoded data frame
product_data_prepared = product_data_prepared.drop(columns=['seller_id'])
product_data_prepared = product_data_prepared.join(enc_df)
product_data_prepared
```
Transform the text columns (query, title and concatenated_tags) into word counts. That is, if "box banheiro adesivo" appears in the "title" column, there will be columns named *title_box*, *title_banheiro* and *title_adesivo* whose values for that row are 1 (illustrated by the small sketch below).
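A minimal sketch of this idea, using the example sentence above (it mirrors what the transformation function below does for the *title* column):
```
# Minimal sketch: "box banheiro adesivo" becomes three count columns with value 1
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

demo_cv = CountVectorizer()
demo_tf = demo_cv.fit_transform(['box banheiro adesivo'])
pd.DataFrame(demo_tf.toarray(),
             columns=['title_' + word for word in demo_cv.get_feature_names()])
```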
```
# Transform query, title and concatenated_tags field
# Create a transformation function
def generate_word_count_frame(column):
# Limit the vectorizer to the 1,000 most popular words (for memory & speed reasons)
cv = CountVectorizer(max_features=1000)
column = column.fillna('')
tf = cv.fit_transform(column)
word_count_frame = pd.DataFrame(tf.toarray(), columns=cv.get_feature_names())
word_count_frame.columns = [column.name+'_'+str(col) for col in word_count_frame.columns]
return word_count_frame
# Run transformation on each relevant column
word_counts = {}
for column in ['query','title','concatenated_tags']:
word_counts[column] = generate_word_count_frame(product_data[column])
```
Check the result of the transformation for the *title* column.
```
word_counts["title"]
```
Replace the *query*, *title* and *concatenated_tags* columns with the 'word_count' columns.
```
# Drop query, title and concatenated_tags columns & append word_count data frames
product_data_prepared = product_data_prepared.drop(columns=['query'])
product_data_prepared = product_data_prepared.join(word_counts['query'])
product_data_prepared = product_data_prepared.drop(columns=['title'])
product_data_prepared = product_data_prepared.join(word_counts['title'])
product_data_prepared = product_data_prepared.drop(columns=['concatenated_tags'])
product_data_prepared = product_data_prepared.join(word_counts['concatenated_tags'])
product_data_prepared
```
# Modeling
Split the data set into training and validation sets.
```
# Prepare data and folder
data_X = product_data_prepared.drop(columns='category')
data_Y = product_data_prepared['category']
X_train, X_test, Y_train, Y_test = train_test_split( data_X, data_Y, test_size=0.1, random_state=3884)
```
Create a decision tree classifier and train the model. To obtain a simpler model that can be visualized, the maximum number of leaves of the tree is limited to 48.
```
clf = DecisionTreeClassifier(max_leaf_nodes = 48,random_state = 2232)
clf.fit(X_train,Y_train)
```
# Model Validation
Compute the model metrics and display them.
```
Y_pred = clf.predict(X_test)
classification_report = metrics.classification_report(Y_test,Y_pred)
print(classification_report)
```
The classifier works relatively well for the *Bebê*, *Biijuterias & Jóias* and *Decoração & Lembracinhas* categories. In these cases more than half of the items were classified correctly, both in terms of precision (more than half of the items classified as a given category really belong to that category) and in terms of recall (more than half of the items of a category were classified correctly).
For the *Outros* and *Papel & Cia* categories only the precision is good, i.e. when an item is assigned to one of them the probability that the classification is correct is high. However, few items of these categories were classified correctly (low recall).
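As a reminder of how precision and recall are computed for a single class, here is a minimal sketch with made-up labels (class 1 plays the role of one of the categories above):
```
# Minimal sketch: precision = TP / (TP + FP), recall = TP / (TP + FN)
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 1, 0, 0, 0]  # what the items really are
y_pred = [1, 1, 0, 1, 0, 0]  # what the classifier said
print(precision_score(y_true, y_pred))  # 2 of the 3 items predicted as 1 are correct -> 0.67
print(recall_score(y_true, y_pred))     # 2 of the 3 real 1-items were found -> 0.67
```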
Write the report to 'metrics_path'.
```
f = open(metrics_path, "w")
f.write(classification_report)
f.close()
```
# Model exportation & visualization
Export the model to 'model_path'.
```
joblib.dump(clf, model_path, compress=9)
```
Visualize the tree using the graphviz package.
```
# Print decision tree
dot_data = tree.export_graphviz(clf, \
out_file=None, \
feature_names=X_train.columns, \
filled=True, \
rounded=True, \
class_names=clf.classes_)
# Remove value fields since they blow up the nodes and make the tree harder to read
dot_data = re.sub(r'\\nvalue = \[[\d\,\s\\n]+\]', '', dot_data)
graph = graphviz.Source(dot_data)
graph
```
An interesting fact about the classification tree is that the first split criterion is the price (when the price is lower than 29.2 the product has a high chance of belonging to the *Lembracinhas* category).
Data Manipulation in Pandas
1. Renaming columns
2. Sorting Data
3. Binning
4. Handling missing values
5. Apply methods in Pandas
6. Aggregation of Data Using Pandas
7. Merging data using Pandas
Column names are also called labels.
```
import numpy as np
import pandas as pd
data={'Title':[None, 'Robinson Crusoe', 'Moby Dick'],
'Author':['sa', 'Daniel Defoe', 'Herman Melville']}
df = pd.DataFrame(data)
df.dropna(how='all', inplace= True)
df
df
```
Renaming columns using pandas
```
df.columns=['TITLE', 'AUTHOR'] #Changing the keys of the column
print(df)
# other method of renaming the column labels
df.rename(columns={'TITLE':'Book Name', 'AUTHOR':'Writer'}) # It is a dictionary
df['New Col'] = [1,2,3] # new column created
df
```
Sorting using Pandas
```
import pandas as pd
df= pd.read_csv('D:\\Datasets\\bigmart_data.csv')
df.head()
df= pd.read_csv('D:\\Datasets\\bigmart_data.csv')
df.dropna(how='all') # this can be used as a substitute for the head() method to quickly inspect the DataFrame
```
Two methods in pandas:
1. sort_values(): to sort a DataFrame by one or more columns
2. sort_index(): to sort a DataFrame by its row indices
```
# Ascending Order
sort_df= df.sort_values(by=['Outlet_Establishment_Year'], ascending= True, inplace= False) # sort_values(by=['column_name'], ascending= True/False, inplace= True/False) inplace if true it changes the original dataFrame into sorted DataFrame
sort_df
# Descending Order
df.sort_values(by=['Outlet_Establishment_Year'], ascending= False, inplace= True)
df
# sort_index() sorts the DataFrame based on the index values of the DataFrame
df.sort_index(ascending= False, inplace= True)
df.head()
# Sorting by multiple columns
multi=df.sort_values(by=['Outlet_Establishment_Year', 'Item_Weight'], ascending= True)
multi.tail()
print(type('Outlet_Location_Type'))
print(df['Outlet_Location_Type'].unique())
multi.sort_values(by=['Outlet_Location_Type'], inplace= True)
multi.head()
```
Binning/ Discretization Using Pandas
```
import pandas as pd
data={'Title':['The Hobbit', 'Robinson Crusoe', 'Moby Dick'],
'Author':['J.R.R. Tolkien', 'Daniel Defoe', 'Herman Melville']}
df= pd.DataFrame(data)
df
df['new_col'] = [3,13,29] # Created a new column named new_col and assigned the values 3, 13 and 29
df['bins'] = pd.cut(x=df['new_col'], bins = [0,10,20,30]) # df['new_bin_column_name']=pd.cut(x=df['column_to_be_binned'], bins=[range of values])
df
df['bins'].unique()
df.rename(columns={'new_col': 'Age'}, inplace= True)
df
# Create a new column 'decade' with labels 10s, 20s, and 30s
df['decade']= pd.cut(x=df['Age'], bins=[0,10,20,30], labels=['10s', '20s', '30s'])
df
```
Handling Missing values in Pandas DataFrame
```
import pandas as pd
import numpy as np
```
The fillna() method helps to fill missing data values in a DataFrame.
You can fill them with the mean/median/mode of the column.
df.isna().sum() counts the missing values in each column.
```
df.head()
df=pd.read_csv('D:\\Datasets\\bigmart_data.csv')
null=df.isna().sum() # NA / NaN/ NULL
print(null)
print(null.sum())
df.isna().head() # we have 1463 na values in Item_Weight column and 2410 NA values in outlet_Size column
# we calculate the percentage of NA values in the dataset
print(df.shape)
total= np.product(df.shape)
print(total)
missing= null.sum()
percentage_NA= (missing/total)*100
print("The percentage of NA values in the DataSet given:", percentage_NA) # There is 3.78% of NA values in the Dataset provided
print(df['Item_Fat_Content'].dtype)
df['Item_Weight'].dtype
#Fill the NA values with mean of the Dataset
mean_weight= df['Item_Weight'].mean()
Weight_mean=df['Item_Weight'].fillna(mean_weight, inplace= True)
print(df.isna().sum())
df['Outlet_Size'].dtype
print(df['Outlet_Size'].unique())
find_mode= df['Outlet_Size'].mode()
print(find_mode)
df['Outlet_Size'].fillna(find_mode.iloc[0], inplace= True)
df.isna().sum()
```
Apply method in pandas
```
def function_name():
print(str)
print("hello world")
if 2<3: print(True)
function_name()
```
lambda functions, which are also called anonymous functions
lambda functions can take any number of arguments/parameters but return just one value in the form of an expression.
lambda requires an expression; lambdas have their own local namespace and cannot access variables other than those in their parameters.
SYNTAX:
lambda arg1, arg2 .... argn : expression
```
sum = lambda a,b : print("Add:",a+b, "\n" "Sub:", a-b)
print(sum(1,2))
import numpy as np
import pandas as pd
import time
import calendar
data= pd.read_csv('D:\\Datasets\\bigmart_data.csv')
data.head()
storage = data.groupby('Item_Type')
storage.first()
#data['Item_Type'].unique()
print("Apply method:\n")
print(data.apply(lambda x: x[0])) # We are using the apply method to access the first row of the dataset; remember that by default the apply method uses axis=0
print("\n")
print("traditional method:\n")
print(data.iloc[0])
print("Apply method:\n")
print(data.apply(lambda x: x[0], axis=1)) # here we are using the apply method to access the first column by index, passing axis=1
print("traditional method:\n")
print(data.iloc[:,0])
print(data.iloc[::-1]) # [:] indicates all values
print(data.apply(lambda x: x['Item_Type'], axis = 1)) # Here we are accessing the column by its column name using the apply method.
print()
print(data['Item_Type'])
data.apply(lambda x: x)
time.time()
time.localtime()
time.asctime()
print(calendar.calendar(2020))
# In apply() even conditions can be added to the dataset as it can iterate through all observations in the DataFrame
print("before clipping\n",data['Item_MRP'][:5])
print()
def clip(price):
if price>200:
price=200
return price
#print( clip(249))
print(data['Item_MRP'].apply(lambda x: clip(x))[:5])
def clip(price):
if price>200:
price=200
return price
for i in data.index:
if (data['Item_MRP'].iloc[i]) > 200:
data['Item_MRP'].iloc[i]=200
data.query('Item_MRP == 200')
# Labeling the values of the categorical data can be done in apply()
print(data['Outlet_Location_Type'].unique())
def labeling(city):
if city == 'Tier 1': # we have labelled tier 1 as 0
label = 0
elif city== 'Tier 2': # we have labelled tier 2 as 1
label = 1
else: # we have labelled tier 3 as 2
label = 2
return label
print(data['Outlet_Location_Type'].apply(lambda x: labeling(x)))
```
1. df.apply(lambda x: x[0]) ---> 1st row of the dataframe
2. df.apply(lambda x: x[0], axis=1) ---> 1st column of the dataframe
3. df.apply(lambda x: x['Column_name'], axis=1) ---> a column selected by name, row by row
4. df.apply(lambda x: function(x)) ---> a custom function applied to each element
A small toy example of these patterns is shown below.
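The DataFrame in this sketch is made up only for illustration:
```
# Toy DataFrame to illustrate the apply() patterns listed above
import pandas as pd
toy = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})
print(toy.apply(lambda x: x[0]))              # 1st row: one value per column (axis=0 is the default)
print(toy.apply(lambda x: x.iloc[0], axis=1)) # 1st column: one value per row (iloc[0] is the position-based form of x[0])
print(toy.apply(lambda x: x['b'], axis=1))    # column 'b' selected by name, row by row
print(toy['a'].apply(lambda x: x * 2))        # a custom function applied element-wise to a single column
```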
```
print(data['Outlet_Size'].unique())
# Labelling Outlet_Size column
print(data['Outlet_Size'].unique())
def label_Outlet_Size(size):
if size== 'Small':
label=0
elif size== 'Medium':
label=2
elif size== 'High':
label=3
else:
label=size
return label
labelled_osize=data['Outlet_Size'].apply(lambda x: label_Outlet_Size(x))
print(labelled_osize)
labelled_osize.isna().sum()
mean_osize= labelled_osize.mean()
print("Mean:", mean_osize)
labelled_osize.fillna(mean_osize, inplace= True)
print(labelled_osize)
df.head()
df.query('5 < Item_Weight < 9.30').head()
var= pd.DataFrame(df['Item_Type'])
var.head()
```
AGGREGATION OF DATA USING PANDAS
```
import pandas as pd
import numpy as np
df=pd.read_csv('C:\\Users\\LENOVO\\Downloads\\Test.csv')
df.head()
df.dropna(how='any', inplace= True)
df.head()
df= df.reset_index(drop=True)
df.isna().sum()
```
Query: what is the mean price for each item type?
Hint: here, price is the Item_MRP column.
```
var = df.groupby('Item_Type')
print(type(var))
var
var['Item_MRP'].mean()
multi = df.groupby(['Item_Type', 'Item_Fat_Content'])
multi.first()
print(multi.Item_MRP.mean())
```
crosstab()
It gives us the frequency table of two columns.
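A tiny made-up example of what crosstab() produces, before applying it to the dataset:
```
# Toy example: frequency table of two categorical columns
import pandas as pd
toy = pd.DataFrame({'size': ['Small', 'Small', 'Medium', 'High'],
                    'city': ['Tier 1', 'Tier 2', 'Tier 1', 'Tier 3']})
pd.crosstab(toy['size'], toy['city'], margins=True)
```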
```
df.head()
print(df.Outlet_Location_Type.unique())
print(df.Outlet_Size.unique())
df.Outlet_Location_Type.count()
sorted= df.sort_values(by= 'Outlet_Establishment_Year')
df['Outlet_Establishment_Year'].unique()
sort= sorted.query('Outlet_Establishment_Year == 1997')
sort['Outlet_Size'].unique()
pd.crosstab(df['Outlet_Size'], df['Outlet_Location_Type'], margins = True)
```
From this we can see that Outlet_Location_Type is expected to affect the values of Outlet_Size.
Inferences which can be made from this tabulation table:
-> The High sized outlets are present only in the Tier 3 part of the city
-> The Medium sized outlets are present in the Tier 1 and Tier 3 parts of the city
-> The Small sized outlets are present in the Tier 1 and Tier 2 parts of the city
Other inferences which can be made from this tabulation table:
-> The High sized outlets are present only in Tier 3
-> ~50% of the Medium sized outlets are present in either Tier 1 or Tier 3
-> ~50% of the Small sized outlets are present in either Tier 1 or Tier 2
```
pd.crosstab(df['Item_Type'], df['Outlet_Type'], margins = True)
```
Inferences which can be made using this tabulation table:
-> Firstly, Supermarket Type1 takes 3722 items as compared to Supermarket Type2
-> Out of 351 Baking Goods, 283 items are sent to Supermarket Type1 and only 68 are sent to Supermarket Type2
Pivot Table
```
df
pd.pivot_table(df, index= ['Outlet_Establishment_Year'], values= 'Item_Weight')
pd.pivot_table(df, index= ['Outlet_Establishment_Year'], values= ['Item_Weight'], aggfunc= [np.mean, np.median, min, max, np.std])
weight_by_item = df.groupby('Item_Type')
weight_by_item.Item_Weight.mean()
```
# Should we remove top-level switches?
```
import io
import zipfile
import os
import pandas
from plotnine import *
import plotnine
plotnine.options.figure_size = (12, 8)
import yaml
from lxml import etree
import warnings
import re
warnings.simplefilter(action='ignore')
def get_yaml(archive_name, yaml_name):
archive = zipfile.ZipFile(archive_name)
return yaml.load(io.BytesIO(archive.read(yaml_name)))
def get_platform(archive_name):
info = get_yaml(archive_name, 'info.yaml')
expfiles = info['expfile']
platform = [f for f in expfiles if f.endswith('xml')]
assert len(platform) == 1
return platform[0]
def get_platform_type(archive_name):
platform_file = get_platform(archive_name)
archive = zipfile.ZipFile(archive_name)
xml = etree.fromstring(io.BytesIO(archive.read(platform_file)).read().decode())
AS = xml.findall('AS')[0]
cluster = AS.findall('cluster')
assert len(cluster) == 1
cluster = cluster[0]
param = cluster.get('topo_parameters')
if param is None:
param = 'cluster'
nb_switches = 'NA'
else:
nb_switches = int(param.split(';')[2].split(',')[1])
return param, nb_switches
def read_csv(archive_name, file_name):
archive = zipfile.ZipFile(archive_name)
res = pandas.read_csv(io.BytesIO(archive.read(file_name)))
res['filename'] = archive_name
return res
def read_result(name):
res = read_csv(name, 'results.csv')
res['start_timestamp'] = pandas.to_datetime(res['start_timestamp'])
res['start'] = res['start_timestamp'] - res['start_timestamp'].min()
return res
def read_sim_result(name):
archive = zipfile.ZipFile(name)
result = pandas.read_csv(io.BytesIO(archive.read('results.csv')))
result['platform'] = get_platform(name)
result['filename'] = name
info = get_yaml(name, 'info.yaml')
expfiles = info['expfile']
dgemm_file = [f for f in expfiles if f.endswith('.yaml')]
assert len(dgemm_file) == 1
result['dgemm_file'] = dgemm_file[0]
reg = re.compile('dgemm_synthetic(_shrinked(-[0-9]+)?)?_(?P<platform_id>[0-9]+).yaml')
match = reg.fullmatch(dgemm_file[0])
assert match is not None
result['platform_id'] = int(match.groupdict()['platform_id'])
dgemm_model = get_yaml(name, dgemm_file[0])
synthetic = 'experiment_date' not in dgemm_model['info']
result['synthetic'] = synthetic
try:
nb_removed_nodes = dgemm_model['info']['nb_removed_nodes']
except KeyError:
nb_removed_nodes = 0
result['nb_removed_nodes'] = nb_removed_nodes
param, nb_switches = get_platform_type(name)
result['topo'] = param
result['nb_switches'] = nb_switches
return result
simulation_dir = 'synthetic_model/2/'
simulation_files = [os.path.join(simulation_dir, f) for f in os.listdir(simulation_dir) if f.endswith('.zip')]
simulation_dir = 'synthetic_model/3/'
simulation_files += [os.path.join(simulation_dir, f) for f in os.listdir(simulation_dir) if f.endswith('.zip')]
df = pandas.concat([read_sim_result(f) for f in simulation_files])
df = df[(df['proc_p'] == 16) & (df['proc_q'] == 16)]
df['nb_nodes'] = df['proc_p'] * df['proc_q']
df['geometry'] = df['proc_p'].astype(str) + '×' + df['proc_q'].astype(str)
df = df[df['nb_nodes'] == df['nb_nodes'].max()]
df.head()
dumped_cols = ['filename', 'dgemm_file', 'platform_id', 'matrix_size', 'topo', 'nb_switches', 'time', 'gflops']
df[dumped_cols].to_csv('/tmp/removing_switches.csv', index=False)
```
### Checking the parameters
```
name_exceptions = {'application_time', 'simulation_time', 'usr_time', 'sys_time', 'time', 'gflops', 'residual', 'cpu_utilization',
'dgemm_coefficient', 'dgemm_intercept', 'dtrsm_coefficient', 'dtrsm_intercept',
'stochastic_cpu', 'polynomial_dgemm', 'stochastic_network', 'heterogeneous_dgemm', 'platform', 'model', 'filename',
'simulation', 'slow_nodes',
'major_page_fault', 'minor_page_fault', 'matrix_size', 'mode',
'start_timestamp', 'stop_timestamp'}
colnames = set(df) - name_exceptions
df[list(colnames)].drop_duplicates()
df.groupby(list(colnames))[['swap']].count()
from IPython.display import display, Markdown
platforms = [(get_platform(f), zipfile.ZipFile(f).read(get_platform(f)).decode('ascii')) for f in simulation_files]
platforms = list(set(platforms))
for name, plat in platforms:
display(Markdown('### %s' % name))
display(Markdown('```xml\n%s\n```' % plat))
```
### Checking the patch in the simulation
```
patches = set()
for row in df.iterrows():
filename = row[1].filename
repos = get_yaml(filename, 'info.yaml')['git_repositories']
hpl = [repo for repo in repos if repo['path'] == 'hpl-2.2']
assert len(hpl) == 1
patches.add(hpl[0]['patch'])
assert len(patches) == 1
display(Markdown('```diff\n%s\n```' % patches.pop()))
```
## Impact on the predicted HPL performance
```
df['n3'] = df.matrix_size ** 3
df['n2'] = df.matrix_size ** 2
df['n'] = df.matrix_size
from statsmodels.formula.api import ols
reg = {}
for topo in df['topo'].unique():
reg[topo] = ols('time ~ n3 + n2 + n', df[df['topo'] == topo]).fit()
all_pred = []
for topo, r in reg.items():
pars = r.params
pred = pandas.DataFrame([{'n': n*10000} for n in range(1, 100)])
pred['n2'] = pred['n']**2
pred['n3'] = pred['n']**3
pred['matrix_size'] = pred['n']
pred['time'] = pars['Intercept']
for col in ['n', 'n2', 'n3']:
pred['time'] += pred[col]*pars[col]
pred['gflops'] = (2/3*pred['n3'] + 2*pred['n2']) / pred['time'] * 1e-9
pred['topo'] = topo
all_pred.append(pred)
pred = pandas.concat(all_pred)
pred = pred.set_index('topo').join(df[['topo', 'nb_switches']].set_index('topo')).reset_index() # attach the number of switches for each topology
ggplot(df) +\
aes(x='matrix_size', color='factor(nb_switches)') +\
geom_point(aes(y='gflops')) +\
geom_line(pred, aes(y='gflops'), linetype='dashed') +\
xlab('Matrix size') +\
ylab('Performance (Gflop/s)') +\
labs(color='Number of switches') +\
expand_limits(y=0) +\
ggtitle('HPL predicted performance on a cluster of 256 nodes') +\
theme_bw()
ggplot(df[df['matrix_size'] == df['matrix_size'].max()]) +\
aes(x='factor(nb_switches)', color='factor(nb_switches)') +\
theme_bw() +\
geom_boxplot(aes(y='gflops')) +\
xlab('Number of switches') +\
ylab('Performance (Gflop/s)') +\
theme(legend_position='none') +\
ggtitle(f'HPL predicted performance with a matrix size of {df["matrix_size"].max()}')
```
Plot Tide Forecasts
===================
Plots the daily tidal displacements for a given location
OTIS format tidal solutions provided by Ohio State University and ESR
- http://volkov.oce.orst.edu/tides/region.html
- https://www.esr.org/research/polar-tide-models/list-of-polar-tide-models/
- ftp://ftp.esr.org/pub/datasets/tmd/
Global Tide Model (GOT) solutions provided by Richard Ray at GSFC
Finite Element Solution (FES) provided by AVISO
- https://www.aviso.altimetry.fr/en/data/products/auxiliary-products/global-tide-fes.html
#### Python Dependencies
- [numpy: Scientific Computing Tools For Python](https://www.numpy.org)
- [scipy: Scientific Tools for Python](https://www.scipy.org/)
- [pyproj: Python interface to PROJ library](https://pypi.org/project/pyproj/)
- [netCDF4: Python interface to the netCDF C library](https://unidata.github.io/netcdf4-python/)
- [matplotlib: Python 2D plotting library](https://matplotlib.org/)
- [ipyleaflet: Jupyter / Leaflet bridge enabling interactive maps](https://github.com/jupyter-widgets/ipyleaflet)
#### Program Dependencies
- `calc_astrol_longitudes.py`: computes the basic astronomical mean longitudes
- `calc_delta_time.py`: calculates difference between universal and dynamic time
- `convert_ll_xy.py`: convert lat/lon points to and from projected coordinates
- `load_constituent.py`: loads parameters for a given tidal constituent
- `load_nodal_corrections.py`: load the nodal corrections for tidal constituents
- `infer_minor_corrections.py`: return corrections for minor constituents
- `model.py`: retrieves tide model parameters for named tide models
- `read_tide_model.py`: extract tidal harmonic constants from OTIS tide models
- `read_netcdf_model.py`: extract tidal harmonic constants from netcdf models
- `read_GOT_model.py`: extract tidal harmonic constants from GSFC GOT models
- `read_FES_model.py`: extract tidal harmonic constants from FES tide models
- `predict_tidal_ts.py`: predict tidal time series at a location using harmonic constants
This notebook uses Jupyter widgets to set the parameters for calculating the tide forecast.
The widgets can be installed as described below.
```
pip3 install --user ipywidgets
jupyter nbextension install --user --py widgetsnbextension
jupyter nbextension enable --user --py widgetsnbextension
jupyter-notebook
```
#### Load modules
```
from __future__ import print_function
import os
import datetime
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
import ipyleaflet as leaflet
import pyTMD.time
import pyTMD.model
from pyTMD.calc_delta_time import calc_delta_time
from pyTMD.infer_minor_corrections import infer_minor_corrections
from pyTMD.predict_tidal_ts import predict_tidal_ts
from pyTMD.read_tide_model import extract_tidal_constants
from pyTMD.read_netcdf_model import extract_netcdf_constants
from pyTMD.read_GOT_model import extract_GOT_constants
from pyTMD.read_FES_model import extract_FES_constants
from pyTMD.spatial import wrap_longitudes
# autoreload
%load_ext autoreload
%autoreload 2
# set the directory with tide models
dirText = widgets.Text(
value=os.getcwd(),
description='Directory:',
disabled=False
)
# dropdown menu for setting tide model
model_list = ['CATS0201','CATS2008','TPXO9-atlas','TPXO9-atlas-v2',
'TPXO9-atlas-v3','TPXO9-atlas-v4','TPXO9.1','TPXO8-atlas','TPXO7.2',
'AODTM-5','AOTIM-5','AOTIM-5-2018','Gr1km-v2',
'GOT4.7','GOT4.8','GOT4.10','FES2014']
modelDropdown = widgets.Dropdown(
options=model_list,
value='GOT4.10',
description='Model:',
disabled=False,
)
# dropdown menu for setting ATLAS format model
atlas_list = ['OTIS','netcdf']
atlasDropdown = widgets.Dropdown(
options=atlas_list,
value='netcdf',
description='ATLAS:',
disabled=False,
)
# checkbox for setting if tide files are compressed
compressCheckBox = widgets.Checkbox(
value=True,
description='Compressed?',
disabled=False,
)
# date picker widget for setting time
datepick = widgets.DatePicker(
description='Date:',
value = datetime.date.today(),
disabled=False
)
# display widgets for setting directory and model
widgets.VBox([
dirText,
modelDropdown,
atlasDropdown,
compressCheckBox,
datepick
])
# default coordinates to use
LAT,LON = (32.86710263,-117.25750387)
m = leaflet.Map(center=(LAT,LON), zoom=12, basemap=leaflet.basemaps.Esri.WorldTopoMap)
# add control for zoom
zoom_slider = widgets.IntSlider(description='Zoom level:', min=0, max=15, value=7)
widgets.jslink((zoom_slider, 'value'), (m, 'zoom'))
zoom_control = leaflet.WidgetControl(widget=zoom_slider, position='topright')
m.add_control(zoom_control)
# add marker with default location
marker = leaflet.Marker(location=(LAT,LON), draggable=True)
m.add_layer(marker)
# add text with marker location
markerText = widgets.Text(
value='{0:0.8f},{1:0.8f}'.format(LAT,LON),
description='Lat/Lon:',
disabled=False
)
# add function for setting marker text if location changed
def set_marker_text(sender):
LAT,LON = marker.location
markerText.value = '{0:0.8f},{1:0.8f}'.format(LAT,wrap_longitudes(LON))
# add function for setting map center if location changed
def set_map_center(sender):
m.center = marker.location
# add function for setting marker location if text changed
def set_marker_location(sender):
LAT,LON = [float(i) for i in markerText.value.split(',')]
marker.location = (LAT,LON)
# watch marker widgets for changes
marker.observe(set_marker_text)
markerText.observe(set_marker_location)
m.observe(set_map_center)
# add control for marker location
marker_control = leaflet.WidgetControl(widget=markerText, position='bottomright')
m.add_control(marker_control)
m
# leaflet location
LAT,LON = marker.location
# verify longitudes
LON = wrap_longitudes(LON)
# convert from calendar date to days relative to Jan 1, 1992 (48622 MJD)
YMD = datepick.value
# calculate a week's forecast at one-minute intervals
minutes = np.arange(7*1440)
tide_time = pyTMD.time.convert_calendar_dates(YMD.year, YMD.month,
YMD.day, minute=minutes)
hours = minutes/60.0
# delta time (TT - UT1) file
delta_file = pyTMD.utilities.get_data_path(['data','merged_deltat.data'])
# get model parameters
model = pyTMD.model(dirText.value, format=atlasDropdown.value,
compressed=compressCheckBox.value).elevation(modelDropdown.value)
# read tidal constants and interpolate to leaflet points
if model.format in ('OTIS','ATLAS'):
amp,ph,D,c = extract_tidal_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.grid_file, model.model_file,
model.projection, TYPE=model.type, METHOD='spline',
EXTRAPOLATE=True, GRID=model.format)
DELTAT = np.zeros_like(tide_time)
elif (model.format == 'netcdf'):
amp,ph,D,c = extract_netcdf_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.grid_file, model.model_file,
TYPE=model.type, METHOD='spline', EXTRAPOLATE=True,
SCALE=model.scale, GZIP=model.compressed)
DELTAT = np.zeros_like(tide_time)
elif (model.format == 'GOT'):
amp,ph,c = extract_GOT_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.model_file, METHOD='spline',
EXTRAPOLATE=True, SCALE=model.scale,
GZIP=model.compressed)
# interpolate delta times from calendar dates to tide time
DELTAT = calc_delta_time(delta_file, tide_time)
elif (model.format == 'FES'):
amp,ph = extract_FES_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.model_file, TYPE=model.type,
VERSION=model.version, METHOD='spline', EXTRAPOLATE=True,
SCALE=model.scale, GZIP=model.compressed)
# interpolate delta times from calendar dates to tide time
DELTAT = calc_delta_time(delta_file, tide_time)
# calculate complex phase in radians for Euler's formula
cph = -1j*ph*np.pi/180.0
# calculate constituent oscillation
hc = amp*np.exp(cph)
# convert time from MJD to days relative to Jan 1, 1992 (48622 MJD)
# predict tidal elevations at time 1 and infer minor corrections
TIDE = predict_tidal_ts(tide_time, hc, c,
DELTAT=DELTAT, CORRECTIONS=model.format)
MINOR = infer_minor_corrections(tide_time, hc, c,
DELTAT=DELTAT, CORRECTIONS=model.format)
TIDE.data[:] += MINOR.data[:]
# convert to centimeters
TIDE.data[:] *= 100.0
# differentiate to calculate high and low tides
diff = np.zeros_like(tide_time, dtype=np.float64)
# forward differentiation for starting point
diff[0] = TIDE.data[1] - TIDE.data[0]
# backward differentiation for end point
diff[-1] = TIDE.data[-1] - TIDE.data[-2]
# centered differentiation for all others
diff[1:-1] = (TIDE.data[2:] - TIDE.data[0:-2])/2.0
# indices of high and low tides
htindex, = np.nonzero((np.sign(diff[0:-1]) >= 0) & (np.sign(diff[1:]) < 0))
ltindex, = np.nonzero((np.sign(diff[0:-1]) <= 0) & (np.sign(diff[1:]) > 0))
# create plot with tidal displacements, high and low tides and dates
fig,ax1 = plt.subplots(num=1)
ax1.plot(hours,TIDE.data,'k')
ax1.plot(hours[htindex],TIDE.data[htindex],'r*')
ax1.plot(hours[ltindex],TIDE.data[ltindex],'b*')
for h in range(24,192,24):
ax1.axvline(h,color='gray',lw=0.5,ls='dashed',dashes=(11,5))
ax1.set_xlim(0,7*24)
ax1.set_ylabel('{0} Tidal Displacement [cm]'.format(model.name))
args = (YMD.year,YMD.month,YMD.day)
ax1.set_xlabel('Time from {0:4d}-{1:02d}-{2:02d} UTC [Hours]'.format(*args))
ax1.set_title(u'{0:0.6f}\u00b0N {1:0.6f}\u00b0W'.format(LAT,LON))
fig.subplots_adjust(left=0.10,right=0.98,bottom=0.10,top=0.95)
plt.show()
```
|
github_jupyter
|
pip3 install --user ipywidgets
jupyter nbextension install --user --py widgetsnbextension
jupyter nbextension enable --user --py widgetsnbextension
jupyter-notebook
from __future__ import print_function
import os
import datetime
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
import ipyleaflet as leaflet
import pyTMD.time
import pyTMD.model
from pyTMD.calc_delta_time import calc_delta_time
from pyTMD.infer_minor_corrections import infer_minor_corrections
from pyTMD.predict_tidal_ts import predict_tidal_ts
from pyTMD.read_tide_model import extract_tidal_constants
from pyTMD.read_netcdf_model import extract_netcdf_constants
from pyTMD.read_GOT_model import extract_GOT_constants
from pyTMD.read_FES_model import extract_FES_constants
from pyTMD.spatial import wrap_longitudes
# autoreload
%load_ext autoreload
%autoreload 2
# set the directory with tide models
dirText = widgets.Text(
value=os.getcwd(),
description='Directory:',
disabled=False
)
# dropdown menu for setting tide model
model_list = ['CATS0201','CATS2008','TPXO9-atlas','TPXO9-atlas-v2',
'TPXO9-atlas-v3','TPXO9-atlas-v4','TPXO9.1','TPXO8-atlas','TPXO7.2',
'AODTM-5','AOTIM-5','AOTIM-5-2018','Gr1km-v2',
'GOT4.7','GOT4.8','GOT4.10','FES2014']
modelDropdown = widgets.Dropdown(
options=model_list,
value='GOT4.10',
description='Model:',
disabled=False,
)
# dropdown menu for setting ATLAS format model
atlas_list = ['OTIS','netcdf']
atlasDropdown = widgets.Dropdown(
options=atlas_list,
value='netcdf',
description='ATLAS:',
disabled=False,
)
# checkbox for setting if tide files are compressed
compressCheckBox = widgets.Checkbox(
value=True,
description='Compressed?',
disabled=False,
)
# date picker widget for setting time
datepick = widgets.DatePicker(
description='Date:',
value = datetime.date.today(),
disabled=False
)
# display widgets for setting directory and model
widgets.VBox([
dirText,
modelDropdown,
atlasDropdown,
compressCheckBox,
datepick
])
# default coordinates to use
LAT,LON = (32.86710263,-117.25750387)
m = leaflet.Map(center=(LAT,LON), zoom=12, basemap=leaflet.basemaps.Esri.WorldTopoMap)
# add control for zoom
zoom_slider = widgets.IntSlider(description='Zoom level:', min=0, max=15, value=7)
widgets.jslink((zoom_slider, 'value'), (m, 'zoom'))
zoom_control = leaflet.WidgetControl(widget=zoom_slider, position='topright')
m.add_control(zoom_control)
# add marker with default location
marker = leaflet.Marker(location=(LAT,LON), draggable=True)
m.add_layer(marker)
# add text with marker location
markerText = widgets.Text(
value='{0:0.8f},{1:0.8f}'.format(LAT,LON),
description='Lat/Lon:',
disabled=False
)
# add function for setting marker text if location changed
def set_marker_text(sender):
LAT,LON = marker.location
markerText.value = '{0:0.8f},{1:0.8f}'.format(LAT,wrap_longitudes(LON))
# add function for setting map center if location changed
def set_map_center(sender):
m.center = marker.location
# add function for setting marker location if text changed
def set_marker_location(sender):
LAT,LON = [float(i) for i in markerText.value.split(',')]
marker.location = (LAT,LON)
# watch marker widgets for changes
marker.observe(set_marker_text)
markerText.observe(set_marker_location)
m.observe(set_map_center)
# add control for marker location
marker_control = leaflet.WidgetControl(widget=markerText, position='bottomright')
m.add_control(marker_control)
m
# leaflet location
LAT,LON = marker.location
# verify longitudes
LON = wrap_longitudes(LON)
# convert from calendar date to days relative to Jan 1, 1992 (48622 MJD)
YMD = datepick.value
# calculate a weeks forecast every minute
minutes = np.arange(7*1440)
tide_time = pyTMD.time.convert_calendar_dates(YMD.year, YMD.month,
YMD.day, minute=minutes)
hours = minutes/60.0
# delta time (TT - UT1) file
delta_file = pyTMD.utilities.get_data_path(['data','merged_deltat.data'])
# get model parameters
model = pyTMD.model(dirText.value, format=atlasDropdown.value,
compressed=compressCheckBox.value).elevation(modelDropdown.value)
# read tidal constants and interpolate to leaflet points
if model.format in ('OTIS','ATLAS'):
amp,ph,D,c = extract_tidal_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.grid_file, model.model_file,
model.projection, TYPE=model.type, METHOD='spline',
EXTRAPOLATE=True, GRID=model.format)
DELTAT = np.zeros_like(tide_time)
elif (model.format == 'netcdf'):
amp,ph,D,c = extract_netcdf_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.grid_file, model.model_file,
TYPE=model.type, METHOD='spline', EXTRAPOLATE=True,
SCALE=model.scale, GZIP=model.compressed)
DELTAT = np.zeros_like(tide_time)
elif (model.format == 'GOT'):
amp,ph,c = extract_GOT_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.model_file, METHOD='spline',
EXTRAPOLATE=True, SCALE=model.scale,
GZIP=model.compressed)
# interpolate delta times from calendar dates to tide time
DELTAT = calc_delta_time(delta_file, tide_time)
elif (model.format == 'FES'):
amp,ph = extract_FES_constants(np.atleast_1d(LON),
np.atleast_1d(LAT), model.model_file, TYPE=model.type,
VERSION=model.version, METHOD='spline', EXTRAPOLATE=True,
SCALE=model.scale, GZIP=model.compressed)
# interpolate delta times from calendar dates to tide time
DELTAT = calc_delta_time(delta_file, tide_time)
# calculate complex phase in radians for Euler's
cph = -1j*ph*np.pi/180.0
# calculate constituent oscillation
hc = amp*np.exp(cph)
# convert time from MJD to days relative to Jan 1, 1992 (48622 MJD)
# predict tidal elevations at time 1 and infer minor corrections
TIDE = predict_tidal_ts(tide_time, hc, c,
DELTAT=DELTAT, CORRECTIONS=model.format)
MINOR = infer_minor_corrections(tide_time, hc, c,
DELTAT=DELTAT, CORRECTIONS=model.format)
TIDE.data[:] += MINOR.data[:]
# convert to centimeters
TIDE.data[:] *= 100.0
# differentiate to calculate high and low tides
diff = np.zeros_like(tide_time, dtype=np.float64)
# forward differentiation for starting point
diff[0] = TIDE.data[1] - TIDE.data[0]
# backward differentiation for end point
diff[-1] = TIDE.data[-1] - TIDE.data[-2]
# centered differentiation for all others
diff[1:-1] = (TIDE.data[2:] - TIDE.data[0:-2])/2.0
# indices of high and low tides
htindex, = np.nonzero((np.sign(diff[0:-1]) >= 0) & (np.sign(diff[1:]) < 0))
ltindex, = np.nonzero((np.sign(diff[0:-1]) <= 0) & (np.sign(diff[1:]) > 0))
# create plot with tidal displacements, high and low tides and dates
fig,ax1 = plt.subplots(num=1)
ax1.plot(hours,TIDE.data,'k')
ax1.plot(hours[htindex],TIDE.data[htindex],'r*')
ax1.plot(hours[ltindex],TIDE.data[ltindex],'b*')
for h in range(24,192,24):
ax1.axvline(h,color='gray',lw=0.5,ls='dashed',dashes=(11,5))
ax1.set_xlim(0,7*24)
ax1.set_ylabel('{0} Tidal Displacement [cm]'.format(model.name))
args = (YMD.year,YMD.month,YMD.day)
ax1.set_xlabel('Time from {0:4d}-{1:02d}-{2:02d} UTC [Hours]'.format(*args))
ax1.set_title(u'{0:0.6f}\u00b0N {1:0.6f}\u00b0W'.format(LAT,LON))
fig.subplots_adjust(left=0.10,right=0.98,bottom=0.10,top=0.95)
plt.show()
# 1) Defining how we assess performance
## What do we mean by "loss"?
<img src="images/lec3_pic01.png">
<img src="images/lec3_pic02.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/cGUQ3/what-do-we-mean-by-loss) 1:00*
<!--TEASER_END-->
How do we formalize this notion of how much we're losing? And in machine learning, we do this by defining something called a loss function.
And what the loss function specifies is the cost incurred when the true observation is y, and I make some other prediction. So, a bit more explicitly, what we're gonna do, is we're gonna estimate our model parameters. And those are $\hat w$. We're gonna use those to form predictions.
- $f_{\hat w}(x) = \hat f(x)$, it's our predicted value at some input x.
The loss function L, is somehow measuring the difference between these two things.
And there are a couple of ways in which we could define the loss function. Very common choices include something that's called absolute error, which just looks at the absolute value of the difference between your true value and your predicted value, and something called squared error, where, instead of just looking at the absolute value, you look at the square of that difference. And so that means that you have a very high cost if that difference is large, relative to just absolute error.
<img src="images/lec3_pic03.png">
<img src="images/lec3_pic04.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/cGUQ3/what-do-we-mean-by-loss) 3:30*
<!--TEASER_END-->
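To make these two choices concrete, here is a minimal sketch in Python (my own illustrative example, not code from the course); `absolute_error` and `squared_error` are just hypothetical helper names:

```python
import numpy as np

def absolute_error(y, y_hat):
    """Pointwise loss L(y, y_hat) = |y - y_hat|."""
    return np.abs(y - y_hat)

def squared_error(y, y_hat):
    """Pointwise loss L(y, y_hat) = (y - y_hat)**2."""
    return (y - y_hat) ** 2

# a true house price vs. a prediction (in $1000s)
print(absolute_error(510.0, 500.0))   # 10.0
print(squared_error(510.0, 500.0))    # 100.0 -- large errors are penalized much more heavily
```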
# 2) 3 measures of loss and their trends with model complexity
## 1) Training error: assessing loss on the training set
The first measure of error of our predictions that we can look at is something called training error. And we discussed this at a high level in the first course of the specialization, but now let's go through it in a little bit more detail.
So, to define training error, we first have to define training data. So, training data typically you have some dataset which I've shown you are these blue circles here, and we're going to choose our training dataset just some subset of these points. So, the greyed circles are ones that are not included in the training set. The blue circles are the ones that we're keeping in this training set. And then we take our training data and, as we've discussed in previous modules of this course, we use it in order to fit our model, to estimate our model parameters. Just as an example, for example with this dataset here, maybe we choose to fit some quadratic function to the data and like we've talked about in order to fit this quadratic function, we're gonna minimize the residual sum of squares on these training data points.
<img src="images/lec3_pic05.png">
<img src="images/lec3_pic06.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 1:00*
<!--TEASER_END-->
So, now we have our estimated model parameters, w hat. And we want to assess the training error of that estimated model. And the way we do that is first we need to define some lost functions. So, maybe we look at squared error, absolute error.
And then the way training error's defined is simply as the average loss, defined over the training points. So, mathematically what this is is simply:
$$\dfrac{1}{N} \sum_{i=1}^N L(y_i, f_{\hat w}(x_i))$$
- N: the total number of observations in my training set
And just to remember to be very clear the estimated parameters were estimated on the training set. They were minimizing the residual sum of squares for these training points that we're looking at again and defining this training error.
<img src="images/lec3_pic07.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 2:00*
<!--TEASER_END-->
So, we can go through this pictorially in the following example, where in this case we're specifically looking at using squared error as our loss function. And in this case, our training error is simply $\dfrac{1}{N}$ times the sum of the difference between our actual house sales price and our predicted house sales price squared, where that sum is taken over all houses in our training data set. And what we see is that in this case where we choose squared error as our loss function, then the form of training error is exactly $\dfrac{1}{N}$ times our residual sum of squares. So, just be aware of that when you're computing training error and reporting these numbers. Here we're defining it as the average loss.
<img src="images/lec3_pic08.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 3:00*
<!--TEASER_END-->
More formally we can write our training error as follows and then we can define something that's commonly referred to just as something as RMSE and the full name is root mean square error. And RMSE is simply the square root of our average loss on the training houses. So, the square root of our training error. And the reason one might consider looking at root mean square error is because the units, in this case, are just dollars. Whereas when we thought about our training error, the units were dollars squared.
<img src="images/lec3_pic09.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 3:39*
<!--TEASER_END-->
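As a hedged sketch of how these definitions translate to code (synthetic numbers and names of my own choosing, with `np.polyfit` standing in for the course's least-squares fit):

```python
import numpy as np

def training_error(y, y_pred):
    """Average squared-error loss over the training set: (1/N) * RSS."""
    return np.mean((y - y_pred) ** 2)

def rmse(y, y_pred):
    """Root mean square error: the square root of training error, in the units of y."""
    return np.sqrt(training_error(y, y_pred))

# toy training set: square feet (in 1000s) and price (in $1000s)
x_train = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
y_train = np.array([300., 420., 500., 610., 640.])
w_hat = np.polyfit(x_train, y_train, deg=2)    # quadratic fit minimizing RSS on training data
y_pred = np.polyval(w_hat, x_train)
print(training_error(y_train, y_pred), rmse(y_train, y_pred))
```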
Now, that we've defined training error, we can look at how training error behaves as model complexity increases. So, to start with let's look at the simplest possible model you might fit, which is just a constant model. So this is the simplest model we're gonna consider, or could consider, and you see that there is pretty significant training error.
Then let's say I fit a linear model. Well, a line, these are all linear models we're looking at, it's linear regression. But just fitting a line to the data. And you see that my training error has gone down.
Then I fit a quadratic function and again training error goes down, and what I see is that as I increase my model complexity to maybe this higher-order polynomial, I have very low training error, just this one pink bar here. So, training error decreases quite significantly with model complexity.
So, there's a decrease in training error as you increase your model complexity. And why is that? Well, it's pretty intuitive, because the model was fit on the training points and then I'm saying how well does it fit it? As I increase the model complexity, I'm better and better able to fit my training data points. So, then when I go to assess my training error with these high-complexity models, I have very low training error.
<img src="images/lec3_pic10.png">
<img src="images/lec3_pic11.png">
<img src="images/lec3_pic12.png">
<img src="images/lec3_pic13.png">
<img src="images/lec3_pic14.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 5:00*
<!--TEASER_END-->
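You can reproduce this trend on synthetic data by fitting polynomials of increasing degree to the same training points and watching the training error fall; everything below (the fake "prices", the degrees tried) is an illustrative assumption, not the course's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.5, 3.5, size=15))        # square feet, in 1000s
y_train = 200 * x_train + 50 * np.sin(3 * x_train) \
          + rng.normal(0, 30, size=x_train.size)          # noisy "prices" in $1000s

for degree in [0, 1, 2, 5, 10]:
    coeffs = np.polyfit(x_train, y_train, deg=degree)     # minimize RSS on the training data
    y_pred = np.polyval(coeffs, x_train)
    train_err = np.mean((y_train - y_pred) ** 2)
    print(f"degree {degree:2d}: training error = {train_err:8.2f}")
```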
So, a natural question is whether a training error is a good measure of predictive performance? And what we're showing here is
one of our high-complexity, high-order polynomial models that had very low training error. So it really fit those training data points well. But how's it gonna perform on some new house?
<img src="images/lec3_pic15.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 6:00*
<!--TEASER_END-->
So, in particular, maybe we're looking at a house in this gray region, so with this range of square feet. Question is, is there something particularly wrong with having $x_t$ square feet? Because what our fitted function is saying is that I believe or I'm predicting that the values of houses with roughly $x_t$ square feet are less valuable than houses with fewer square feet, because there's this dip down in this function. Do we really believe that this is a true dip in value, that these houses are just less desirable than houses with fewer or more square feet? Probably not. So, what's going wrong here?
<img src="images/lec3_pic16.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 6:45*
<!--TEASER_END-->
The issue is the fact that training error is overly optimistic when we're going to assess predictive performance. And that's because these parameters, $\hat w$, were fit on the training data. They were fit to minimize residual sum of squares, which can often be related to training error. And then we're using training error to assess predictive performance but that's gonna be very very optimistic as this picture shows. So, in general, having small training error does not imply having good predictive performance unless your training data set is really representative of everything that you might see there out in the world.
<img src="images/lec3_pic17.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 7:30*
<!--TEASER_END-->
## 2) Generalization error: what we really want
So, instead of using training error to assess our predictive performance. What we'd really like to do is analyze something that's called generalization or true error. So, in particular, we really want an estimate of what the loss is averaged over all houses that we might ever see in our neighborhood. But really, in our dataset we only have a few examples of houses that were sold. But there are lots of other houses that are in our neighborhood that we don't have in our dataset, or other houses that
you might imagine having been sold.
<img src="images/lec3_pic18.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 0:30*
<!--TEASER_END-->
Okay, so to compute this estimate over all houses that we might see in our dataset, we'd like to weight these house pairs,
so the pair of house attributes and the house sale price, by how likely that pair is to have occurred in our dataset. So to do this we can think about defining a distribution, in this case over square feet of houses in our neighborhood.
What this picture is showing is a distribution that says we're very unlikely to see houses with very small or low number of square feet, very small houses. And we're also very unlikely to see really, really massive houses. So there's some bell curve to this, there's some sweet spot of kind of typical houses in our neighborhood, and then the likelihood drops off from there.
<img src="images/lec3_pic19.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 1:30*
<!--TEASER_END-->
Likewise what we can do is define a distribution that says for a given square footage of a house, what's the distribution over
the sales price of that house? So let's say the house has 2,640 square feet. Maybe I expect the range of house prices to be somewhere between 680,000 and maybe 950,000. That might be a typical range. But of course, you might see much lower valued houses or higher valued, depending on the quality of that house.
<img src="images/lec3_pic20.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 1:39*
<!--TEASER_END-->
Formally when we go to define our generalization error, we're saying that we're taking the average value of our loss weighted by how likely those pairs were in our dataset.
So specifically we estimate our model parameters on our training data set so that's what gives us $\hat w$. That defines the model we're using for prediction, and then we have our loss function, assessing the cost of predicting $f_{\hat w}$ at our square foot x when the true value was y. And then what we're gonna do is we're gonna average over all possible (x,y). But weighted by how likely they are according to those distributions over square feet and value given square feet.
<img src="images/lec3_pic21.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 3:00*
<!--TEASER_END-->
Let's go back to these plots of looking at error versus model complexity. But in this case let's quantify our generalization error as a function of this complexity.
And to do this, what I'm showing with this blue shaded region here, which has different gradations going from white to darker blue, is the distribution of houses that I'm likely to see in my dataset. So, this white region here, these are the houses that I'm very likely to see, and then as I go further away from the white region I get to less likely house sale prices given a specific square foot value.
And so what I'm gonna do when I look at thinking about generalization error is I'm gonna take my fitted function where remember this green line was fit on the training data which are these blue circles. And then I'm gonna say, how well does it predict houses in this shaded blue region, weighted by how likely they are, how close to that white region.
Okay, so what I see here is this constant model who really doesn't approximate things well except maybe in this region here. So overall it has a reasonably high generalization error and I can go to my more complex model.
<img src="images/lec3_pic22.png">
<img src="images/lec3_pic23.png">
<img src="images/lec3_pic24.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 5:00*
<!--TEASER_END-->
Then I get to this much higher order polynomial, and when we were looking at training error, the training error was lower, right? But now, when we think about generalization error, we actually see that the generalization error is gonna go up relative to the simpler model.
<img src="images/lec3_pic25.png">
<img src="images/lec3_pic26.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 6:50*
<!--TEASER_END-->
So our generalization error in general will have some shape where it's going down. And then we get to a point where the error starts increasing, because we're getting to these overly complex models that fit the training data really well but don't generalize to other houses that we might see.
But importantly, in contrast to training error we can't actually compute generalization error. Because everything was relative
to this true distribution, the true way in which the world works. How likely houses are to appear in our dataset over all possible square feet and all possible house values. And of course, we don't know what that is. So, this is our ideal picture or
our cartoon of what would happen. But we can't actually go along and compute these different points.
<img src="images/lec3_pic27.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 8:00*
<!--TEASER_END-->
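Although generalization error can't be computed for real data, in a simulation where we invent the "true" relationship and noise ourselves we can approximate it by averaging the loss over a very large sample drawn from that distribution. A sketch under those made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_true(x):                                    # pretend "true" relationship (an assumption)
    return 200 * x + 50 * np.sin(3 * x)

sigma = 30.0                                      # std dev of the irreducible noise

# one training set of N = 20 houses (square feet in 1000s, price in $1000s)
x_train = rng.uniform(0.5, 3.5, size=20)
y_train = f_true(x_train) + rng.normal(0, sigma, size=x_train.size)
w_hat = np.polyfit(x_train, y_train, deg=2)       # fitted model

# Monte Carlo approximation of generalization error: average squared loss
# over a huge sample drawn from the (x, y) distribution we invented above
x_all = rng.uniform(0.5, 3.5, size=200_000)
y_all = f_true(x_all) + rng.normal(0, sigma, size=x_all.size)
print("approx. generalization error:", np.mean((y_all - np.polyval(w_hat, x_all)) ** 2))
```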
## 3) Test error: what we can actually compute
So we can't compute generalization error, but we want some better measure of our predictive performance than training error gives us. And so this takes us to something called test error, and what test error is going to allow us to do is approximate generalization error.
And the way we're gonna do this is by approximating the error, looking at houses that aren't in our training set.
<img src="images/lec3_pic28.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:00*
<!--TEASER_END-->
So instead of including all these colored houses in our training set, we're gonna shade out some of them, these shaded gray houses and we're gonna make these into what's called a test set.
<img src="images/lec3_pic29.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:15*
<!--TEASER_END-->
And when we go to fit our models, we're just going to fit our models on the training data set. But then when we go to assess
our performance of that model, we can look at these test houses, and these are hopefully going to serve as a proxy of everything out there in the world. So hopefully, our test data set is a good measure of other houses that we might see, or at least in order to think of how well a given model is performing.
<img src="images/lec3_pic30.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:25*
<!--TEASER_END-->
So test error is gonna be our average loss computed over the houses in our test data set.
- $N_{test}$: the number of houses in our test data set
- $\hat w$: very important, the estimated parameters were fit on the training data set
Okay, so even though this function looks very much like training error, the sum is over the test houses, but the function we're looking at was fit on training data. Okay, so these parameters in this fitted function never saw the test data.
<img src="images/lec3_pic31.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:20*
<!--TEASER_END-->
So just to illustrate this, we might think of fitting a quadratic function through this data, where we're gonna minimize the residual sum of squares on the training points, those blue circles, to get our estimated parameters $\hat w$.
<img src="images/lec3_pic32.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:33*
<!--TEASER_END-->
Then when we go to compute our test error, which in this case again we're gonna use squared error as an example, we're computing this error over the test points, all these different grey circles here. So test error is $\dfrac{1}{N_{test}}$ times the sum of the squared differences between our true house sales prices and our predicted prices, summing over all houses in our test data set.
<img src="images/lec3_pic33.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:45*
<!--TEASER_END-->
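In code, the key point is that the parameters are fit on the training houses only, while the loss is averaged over the held-out test houses. A minimal sketch on synthetic data (the split and the numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic "houses": square feet (in 1000s) and noisy prices (in $1000s)
x = rng.uniform(0.5, 3.5, size=30)
y = 200 * x + 50 * np.sin(3 * x) + rng.normal(0, 30, size=x.size)

# first 20 houses form the training set, the last 10 the test set
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

w_hat = np.polyfit(x_train, y_train, deg=2)        # w_hat fit on training houses only
y_pred_test = np.polyval(w_hat, x_test)            # these houses never influenced w_hat
test_err = np.mean((y_test - y_pred_test) ** 2)    # (1/N_test) * sum of squared errors
print("test error:", test_err)
```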
**Let's summarize our measures of error as a function of model complexity**
- Our training error decreased with increasing model complexity.
- In contrast, our generalization error (or true error) went down for some period of time. But then we started getting to overly complex models that didn't generalize well, and the generalization error started increasing.
- Our test error is a noisy approximation of generalization error. Because if our test data set included everything we might ever see in the world in proportion to how likely it was to be seen, then that would be exactly our generalization error. But of course, our test data set is just some finite data set, and we're using it to approximate generalization error, so it's gonna be some noisy version of this curve here.
Test error is the thing that we can actually compute. Generalization error is the thing that we really want.
<img src="images/lec3_pic34.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 3:00*
<!--TEASER_END-->
## 4) Defining overfitting
The notion of overfitting is the following: suppose we have a model with estimated parameters $\hat w$, and there exists another set of estimated parameters, which I'll just call $w'$.
The model is overfit if two conditions hold:
- training error ($\hat w$) < training error ($w'$).
- true error ($\hat w$) > true error ($w'$).
Generally, the models that are overfit are the ones that have smaller training error. These are the ones that are really highly fit to the training data set but don't generalize well. Whereas the other points on the other half of this space are the ones that are not really well fit to the training data and also don't generalize well.
<img src="images/lec3_pic35.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/u8c2x/defining-overfitting) 2:00*
<!--TEASER_END-->
## 5) Training/test split
So we've said to assess the performance of our model, we really need to have a test data set carved out from our full data set. So, this raises the question of, how do I think about dividing the data set into training data versus test data?
- If I put too few points in my training set, then I'm not going to estimate my model well. And so, I'm going to have clearly bad predictor performance because of that.
- If I put too few points in my test set, that's gonna be a bad approximation to generalization error.
A general rule of thumb is typically you want just enough points in your test set to approximate generalization error well, and you want all the remaining points in your training data set, because you want to have as many points as possible in your training data set to learn a good model.
<img src="images/lec3_pic36.png">
<img src="images/lec3_pic37.png">
<img src="images/lec3_pic38.png">
<img src="images/lec3_pic39.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qn2vj/training-test-split) 1:00*
<!--TEASER_END-->
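One common way to carve out the split in practice is to shuffle indices and hold back a small fraction (say 20%) for testing, keeping the rest for training. This is a generic sketch, not a recipe prescribed by the course; `train_test_split` is just a hypothetical helper name:

```python
import numpy as np

def train_test_split(x, y, test_fraction=0.2, seed=0):
    """Randomly hold out a fraction of the data as a test set; return train and test arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = int(round(test_fraction * len(x)))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return x[train_idx], y[train_idx], x[test_idx], y[test_idx]

# usage with hypothetical arrays x, y of house features and prices:
# x_train, y_train, x_test, y_test = train_test_split(x, y, test_fraction=0.2)
```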
# 3) 3 sources of error and the bias-variance tradeoff
## 1) Irreducible error and bias
We've talked about three different measures of error. And now in this part, we're gonna talk about three different sources of error. And this is gonna lead us into a conversation of the bias variance trade-off. Okay, so when we were forming our prediction, there are three different sources of error.
- Noise
- Bias
- Variance
<img src="images/lec3_pic40.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 0:30*
<!--TEASER_END-->
**Let's look at the noise term**
As we've mentioned many times in this specialization, data are inherently noisy.
So the way the world works is that there's some true relationship between square feet and the value of a house. Or generically, between x and y. And we're representing that arbitrary relationship defined by the world, by $f_{w(true)}$, which is the notation we're using for that functional relationship.
But of course that's not a perfect description between x and y, the number of square feet and the house value. There are lots of other contributing factors, including other attributes of the house that are not included just in square feet, or how a person feels when they go in and make a purchase of a house, or a personal relationship they might have with the owners. Or lots and lots of other things that we can't ever perfectly capture with just some function between square feet and value, and so that is the noise that's inherent in this process, represented by this epsilon term ($\epsilon$). So in particular, for any observation $y_i$, it's the sum of this relationship between the square feet and the value plus this noise term $\epsilon_i$ specific to that $i$th house.
And we've talked before about our assumption that this noise has zero mean because if it didn't that could be shoved into the f function instead. But what we haven't talked about is the spread of that noise. So at any given square feet what kind of variation and house price are we likely to see based on this type of noise that's inherent in our observations. And so this is referred to as the variance of this noise term epsilon. And this is something that's just a property of the data. We don't have control over this. This has nothing to do with our model nor our estimation procedure, it's just something that we have to deal with. And so this is called Irreducible error because it's nothing that we can reduce through choosing a better model or a better estimation procedure.
<img src="images/lec3_pic41.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 2:45*
<!--TEASER_END-->
The things that we can control are bias and variance, so we're gonna focus quite heavily on those two terms. So let's start by talking about bias. And this is basically just an assessment of how well my model can fit the true relationship between x and y.
So to think about this, let's think about how we get data in our data set. So here these points that we observed they're just a random snapshot of N houses that were sold and recorded and we tabulated in our data set. Well, based on that data set,
we fit some function and, thinking about bias, it's intuitive to start with a very simple model: just a constant function. But what if another set of N houses had been sold? Then we would have had a different data set that we were using. And when we went to fit our model, we would have gotten a different line.
In the first data set, I tended to draw points that were below the true relationship, so the houses in our data set happened to have values less than what the world kind of specifies as typical. And on the right hand side I drew points
that tended to lie above the line. So these are pretty extremely different data sets, but what you see is that the fits are pretty similar.
<img src="images/lec3_pic42.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 4:00*
<!--TEASER_END-->
So what we are saying is, over all possible data sets of house sales of size N that we might have been presented with, what do we expect our fit to look like?
There's a continuum of possible fits we might have gotten. And for all those possible fits, here this dashed green line represents our average fit, averaged over all those fits weighted by how likely they were to have appeared.
<img src="images/lec3_pic43.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 5:00*
<!--TEASER_END-->
Now we can start talking about bias. What bias is, is it's the difference between this average fit and the true function, $f_{w(true)}$.
That's what this equation shows here, and we're seeing this with this gray shaded region. That's the difference between the true
function and our average fit. And so intuitively what bias is saying is, is our model flexible enough to on average be able to capture the true relationship between square feet and house value. And what we see is that for this very simple constant model, this low complexity model has high bias. It's not flexible enough to have a good approximation to the true relationship. And because of these differences, because of this bias, this leads to errors in our prediction.
<img src="images/lec3_pic44.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 6:15*
<!--TEASER_END-->
## 2) Variance and the bias-variance tradeoff
http://scott.fortmann-roe.com/docs/BiasVariance.html
Let's turn to this third component which is a variance.
And what variance is gonna say is, how different can my specific fits to a given data set be from one another, as I'm looking at different possible data sets? And in this case, when we are looking at just this constant model, we showed by that early picture
where I drew points that were mainly above the true relationship and the points mainly below, that the actual resulting fits didn't vary very much. And when you look at the space of all possible observations, you see that the fits, they're fairly
similar, they're fairly stable.
<img src="images/lec3_pic45.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 0:30*
<!--TEASER_END-->
When you look at the variation in these fits, which I'm drawing with these grey bars here. We see that they don't vary very much.
<img src="images/lec3_pic46.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 0:54*
<!--TEASER_END-->
So, for this low complexity model, we see that there's low variance. So, to summarize what this variance is saying is, how much can the fits vary? And if they could vary dramatically from one data set to the other, then you would have very erratic predictions. Your prediction would just be sensitive to what data set you got. So, that would be a source of error in your predictions.
<img src="images/lec3_pic47.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 1:10*
<!--TEASER_END-->
And to see this, we can start looking at high-complexity models. So in particular, let's look at this data set again. And now, let's fit some high-order polynomial to it.
In the dataset on the right, let's choose two points, which I'm gonna highlight as these pink circles. And let's just move them a little bit. So, out of this whole data set, I've just moved two observations, and not too dramatically, but I get a dramatically different fit.
<img src="images/lec3_pic48.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 1:20*
<!--TEASER_END-->
So then, when I think about looking over all possible data sets I might get, I might get some crazy set of curves. There is an average curve. And in this case, the average curve is actually pretty well behaved. Because this wild, wiggly curve is at any point, equally, likely to have been wild above, or wild below. So, on average over all data sets, it's actually a fairly smooth reasonable curve. But if I look at the variation between these fits, it's really large. So, what we're saying is that high-complexity models have high variance.
<img src="images/lec3_pic49.png">
<img src="images/lec3_pic50.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 2:30*
<!--TEASER_END-->
On the other hand, if I look at the bias of this model, so here again, I'm showing this average fit, which was this fairly well behaved curve. And it matched pretty well to the true relationship between square feet and house value, because my model is really flexible. So on average, it was able to fit pretty precisely that true relationship. So, these high-complexity models have low bias.
<img src="images/lec3_pic51.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 3:00*
<!--TEASER_END-->
We can now talk about this bias-variance tradeoff. So, in particular, we're gonna plot bias and variance as a function of model complexity.
- Model complexity increases, our bias decreases.
- Model complexity increases, variance increases. So, our very simple model had very low variance, and the high-complexity models had high variance.
What we see is that there's this natural tradeoff between bias and variance. And one way to summarize this is something that's called mean squared error.
MSE = bias$^2$ + variance
Machine learning is all about this tradeoff between bias and variance. And the goal is finding this sweet spot. This is the sweet spot where we get our minimum error, the minimum contribution of bias and variance, to our prediction errors.
But just like with generalization error, we cannot compute bias and variance, and mean squared error. Well, the reason is because just like with generalization error, they were defined in terms of the true function. Well, bias was defined very
explicitly in terms of the relationship relative to the true function. And when we think about defining variance, we have to average over all possible data sets, and the same was true for bias too. But all possible data sets of size n, we could have gotten from the world, and we just don't know what that is. So, we can't compute these things exactly. But throughout the rest of this course, we're gonna look at ways to optimize this tradeoff between bias and variance in a practical way.
<img src="images/lec3_pic52.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 6:00*
<!--TEASER_END-->
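We can't compute bias and variance for real data, but we can estimate them in a simulation where the true function is known, by refitting the model on many independently drawn training sets and summarizing the fits at a target $x_t$. A hedged sketch under the same invented assumptions as above:

```python
import numpy as np

rng = np.random.default_rng(4)

def f_true(x):
    return 200 * x + 50 * np.sin(3 * x)           # invented "true" relationship

sigma, n, x_t = 30.0, 20, 2.0                     # noise sd, training-set size, target input

for degree in [0, 2, 6]:                          # increasing model complexity
    preds_at_xt = []
    for _ in range(2000):                         # many possible training sets of size n
        x = rng.uniform(0.5, 3.5, size=n)
        y = f_true(x) + rng.normal(0, sigma, size=n)
        w_hat = np.polyfit(x, y, deg=degree)
        preds_at_xt.append(np.polyval(w_hat, x_t))
    preds_at_xt = np.array(preds_at_xt)
    avg_fit = preds_at_xt.mean()                  # f_wbar(x_t), the average fit at x_t
    bias2 = (f_true(x_t) - avg_fit) ** 2          # squared bias at x_t
    var = preds_at_xt.var()                       # variance of the fits at x_t
    print(f"degree {degree}: bias^2 = {bias2:8.2f}, variance = {var:8.2f}")
```

Under these assumptions, the constant model should show a large bias squared and a small variance, while the higher-degree model shows the reverse, which is exactly the tradeoff described above.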
## 3) Error vs. amount of data
Let's start with looking at our true error or generalization error. But first, I want to make sure it's clear that we are looking at these errors for a fixed model complexity.
If we have very few data points, our fitted function is a pretty poor estimate of the true relationship between x and y. So our true error's gonna be pretty high, so let's say that $\hat w$ is not approximated well from few points. But as we get more and
more data, we get a better and better approximation of our model and our true error decreases. But it decreases to some limit.
And what is that limit? Well that limit is the bias plus the noise inherent in the data. Because as we get tons and
tons of observations, well, we're taking our model and fitting it as well as we could ever hope to fit it, because we have every observation out there in the world. But the model might just not be flexible enough to capture the true relationship between x and y, and that is our notion of bias. Plus, of course, there's the error just from the noise in observations, that other contribution. Okay, so this difference here is the bias of the model and the noise of the data.
Now let's look at training error. So let's say our training error starts somewhere. But what ends up happening is training error goes up as you get more and more data points. With few data points, a fixed complexity model can fit them reasonably well, where reasonably of course depends on what the complexity of the model is. But as I get more and more and more data points, that same complexity of model can't hope to fit all these points perfectly well. What is the limit of training error? That limit is exactly the same as the limit of our true error.
The reason is I have tons and tons of points there. That's all points that there could ever be possibly in the world, and I fit my model to it. And if I measure training error, I'm running it to all the possible points there are out there in the world. And that's exactly what our definition of true error is. So they converge to exactly the same point in the limit. Where that difference again, is the bias inherent from the lack of flexibility of the model, plus the noise inherent in the data.
So just to write this down:
- In the limit, as I'm getting lots and lots of data points, this curve is gonna flatten out to how well the model can fit the true relationship $f_{true}$.
- In the limit, true error = training error.
So what we've seen so far in this module are three different measures of error. Our training, our true generalization error as well as our test error approximation of generalization error. And we've seen three different contributions to our errors. Thinking about that inherent noise in the data and then thinking about this notion of bias in variance. And we finally concluded with this discussion on the tradeoff between bias in variance and how bias appears no matter how much data we have. We can't escape the bias from having a specified model of a given complexity.
<img src="images/lec3_pic53.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/lYBeX/error-vs-amount-of-data) 5:00*
<!--TEASER_END-->
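The same style of simulation shows the trend described above: for a fixed model complexity, training error rises toward a plateau as N grows, while the (approximated) true error falls toward the same plateau of bias plus noise. Again, all quantities below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def f_true(x):
    return 200 * x + 50 * np.sin(3 * x)            # invented "true" relationship

sigma, degree = 30.0, 2                            # noise sd, fixed model complexity

# a very large sample as a stand-in for "all possible" houses
x_all = rng.uniform(0.5, 3.5, size=200_000)
y_all = f_true(x_all) + rng.normal(0, sigma, size=x_all.size)

for n in [5, 10, 50, 500, 5000]:
    x = rng.uniform(0.5, 3.5, size=n)
    y = f_true(x) + rng.normal(0, sigma, size=n)
    w_hat = np.polyfit(x, y, deg=degree)
    train_err = np.mean((y - np.polyval(w_hat, x)) ** 2)
    true_err = np.mean((y_all - np.polyval(w_hat, x_all)) ** 2)
    print(f"N = {n:5d}: training error = {train_err:8.1f}, approx. true error = {true_err:8.1f}")
```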
# 4) Formally defining and deriving the 3 sources of error
## 1) Formally defining the 3 sources of error
So we mentioned that the training set is just a random sample of some N observations. In this case, some N houses that were sold and recorded, but what if N other houses had been sold and recorded? How would our performance change? So for example, here in this picture we're showing one set of N observations that are used for training data, those are the blue circles. And we fit some quadratic function through this data, and here we show some other set of N observations and we see that we get a different fit.
<img src="images/lec3_pic54.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:00*
<!--TEASER_END-->
And to assess our performance of each one of these fits we can think about looking at generalization error.
- So in the first case we might get one generalization error of this specific fit $\hat w^{(1)}$.
- And in the second case we would get some different evaluation of generalization error. Let's call it generalization error of $\hat w^{(2)}$.
<img src="images/lec3_pic55.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:30*
<!--TEASER_END-->
But one thing that we might be interested in is, how do we perform on average for a training data set of N observations?
Because imagine I'm trying to develop a tool that's gonna be used by real estate agents to form these types of predictions. Well, I'd like to design my tool, package it up and send it out there, and then a real estate agent might come in and have some set of observations of house sales from their neighborhood that they're using to make their predictions. And that set might be different from another real estate agent's.
And what I'd like to know, is for a given amount of data, some training set of size N, how well should I expect the performance of this model to be, regardless of what specific training dataset I'm looking at? So in these cases what we like to do is average our performance over all possible fits that we might get. What I mean by that is all possible training data sets that might have appeared, and the resulting fits on those data sets.
<img src="images/lec3_pic56.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:50*
<!--TEASER_END-->
So formally, we're gonna define this thing called expected prediction error, which is the expected value of our generalization
error, over different training data sets. So very specifically, for a given training data set, we get parameters that are fit to that data set. So I'll call that $\hat w$ of training set. And then for that estimated model, I can evaluate my generalization error and what the expected prediction error is doing is it's taking a weighted average over all possible training sets that I might have seen. Where for each one I get a different set of estimated parameters and thus a different notion of the generalization error.
<img src="images/lec3_pic57.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 3:00*
<!--TEASER_END-->
And to start analyzing this quantity of prediction error, let's specifically look at some target input $x_t$, which might be a house with 2,640 square feet. And let's also take our loss function to be squared error. So in this case when we're talking
specifically about a target point $x_t$. What we can do later after we do the analysis specifically for $x_t$ is we can think about averaging this over all possible $x_t$, over all x all square feet. But in some cases we might actually be interested in one region of our input space in particular. And then when we talk about using squared error in particular, this is gonna allow our analysis to follow through really nicely as we're gonna show not in this video, but in our next even more in
depth video which is also optional.
<img src="images/lec3_pic58.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 4:00*
<!--TEASER_END-->
But under these assumptions of looking specifically at $x_t$ and looking at squared error as our measure of loss, you can show that the average prediction error at $x_t$ is simply the sum of three terms which we're gonna go through: $\sigma$ (sigma), bias, and variance.
So these terms are yet to be defined, and this is what we're gonna walk through in this video in a much more formal way than we did in the previous set of slides.
<img src="images/lec3_pic59.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 4:35*
<!--TEASER_END-->
So let's start by talking about this first term, sigma squared and what this is gonna represent is the noise we talked about in the earlier videos.
So in particular, remember that we're saying that there's some true relationship between square feet and house value. That that's just a relationship that exists out there in the world, and that's captured by $f_{w(true)}$, but of course that doesn't fully capture how we think about the value of a house. There are other factors at play. And so all those other factors out there in the world are captured by our noise term, which here we write as just an additive term plus epsilon.
So epsilon is our noise, and we said that this noise term has zero mean, because if not we could just shove that other component into $f_{w(true)}$. But we're just gonna make the assumption that epsilon has zero mean, and then we can start talking about what is the
spread of noise you're likely to see at any point in the input space. And that spread is called the variance. So we denote it by sigma squared and sigma squared is the variance of this noise epsilon.
And as we talked about before, this noise is just noise that's out there in the world; we have no control over it no matter how complicated and interesting a model we specify or what algorithm we use for fitting that model. We can't do anything about the fact
that we're using x for our prediction. But there's just inherently some noise in how our observations are generated in the world. So for this reason, this is called our irreducible error. Because it's noise that we can't reduce through any choices that we have control over.
<img src="images/lec3_pic60.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 5:50*
<!--TEASER_END-->
So now let's talk about this second term, bias squared.
And remember that when we talked about bias this was a notion of how well our model could on average fit the true relationship between x and y. But now let's go through this at a much more formal level. And in particular let's just remember that
there's some relationship between square feet and house value, in our case represented by this orange line. And then from this true world we get some data set that defines a training set, which are these blue circles. And using this training data we estimate our model parameters. Well, if we had gotten some other set of N points, we would have fit some other functions.
<img src="images/lec3_pic61.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 7:00*
<!--TEASER_END-->
Now, when I look over all possible data sets of size N that I might have gotten, remember this blue shaded region here represents the distribution over x and y, so how likely it is to get different combinations of x and y. And let's say I draw N points from this joint distribution over x and y, and over all possible values I look at an estimated function. So for example, here are the two estimated functions from the previous slide, those example data sets that I showed. But of course there's a whole continuum of estimated functions that I get for different training sets of size N. Then when I average these estimated functions, these specific fits, over all my possible training data sets, what I get is my average fit. So now let's talk about this a little bit more formally. We had already presented this in our previous video.
This is $f_{\bar w}$ (f sub w bar). But now, let's define this: it is the expectation of the fit on a specific training data set, or let me rephrase that, the fit I get on a specific training data set averaged over all possible training data sets of size N that I might get. So that is the formal definition of this $f_{\bar w}$ (f sub w bar), what we have been calling our average fit.
And what we're talking about when we're talking about bias is, we're talking about comparing this average fit to the true relationship. And here remember again, we're focusing specifically on some target $x_t$. And so the bias at $x_t$ is the difference between the true relationship at $x_t$ between $x_t$ and y. So between a given square feet and the house value whatever the true relationship is between that input and the observation versus this average relationship estimated over all possible training data sets.
<img src="images/lec3_pic62.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 9:00*
<!--TEASER_END-->
So that is the formal notion of bias of $x_t$, and let's just remember that when it comes in as our error term, we're looking at bias squared.
<img src="images/lec3_pic63.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 9:25*
<!--TEASER_END-->
**So that's the second term. Now let's turn to the third term, which is variance.**
And let's go through this definition where again, we're interested in this average fit $f_{\bar w}$ (f sub w bar), this green dashed line. But that really isn't the quantity of interest. It's gonna be used in our definition here. But the thing that we're really interested in, is over all possible fits we might see. How much do they deviate from this expected fit?
<img src="images/lec3_pic64.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 10:00*
<!--TEASER_END-->
So thinking about again, specifically at our target $x_t$, how much variation is there in the training dataset specific fits across all training datasets we might see?
<img src="images/lec3_pic65.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 10:15*
<!--TEASER_END-->
And that's this variance term and now again, let's define it very formally.
Well, let me first state what variance is in general. The variance of some random variable is simply the expected value of the squared difference between that random variable and its mean. So in this context, when we're looking at the variability of these functions at $x_t$, we're taking the expectation and our random quantity is our estimated function for a specific training data set at $x_t$.
And then what's the mean of that random function? The mean is this average fit, this $f_{\bar w}$ (f sub w bar). So we're looking at the difference between the fit on a specific training dataset and what I expect to get averaged over all possible training datasets. I look at that quantity squared, and what is my expectation taken over?
Let me just mention that this quantity when I take this squared, represents a notion of how much deviation a specific fit has from the expected fit at $x_t$. And then when I think about what the expectation is taking over, it's taking over all possible
training data sets of size N. So that's my variance term.
And when we think intuitively about why it makes sense that we have the sum of these three terms in this specific form, well, what we're saying is that variance is telling us how much the specific function I'm using for prediction can vary. I'm just gonna use one of these functions for prediction: I get a training dataset that gives me an $f_{\hat w}$ (f sub w hat), and I'm using that for prediction. Well, how much can that deviate from my expected fit over all datasets I might have seen?
So again, going back to our analogy, I'm a real estate agent, I grab my data set, I fit a specific function to that training data. And I wanna know well, how wild of a fit could this be relative to what I might have seen on average over all possible datasets that all these other realtors are using out there?
And so of course, if the function from one realtor to another realtor looking at different data sets can vary dramatically, that can be a source of error in our predictions. But another source of error which the biases is capturing is over all these possible datasets, all these possible realtors. If this average function just can never capture anything close to their true relationship between square feet and house value, then we can't hope to get good predictions either and that's what our bias is capturing. And why are we looking at bias squared? Well, that's putting it on an equal footing of these variance terms because remember bias was just the difference between the true value and our expected value. But these variance terms are looking at
these types of quantities but squared. So that's intuitively why we get bias squared. And then finally, what's our third source of error?
Well, let's say my estimator has very low variance, and the model happens to be a very good
fit so neither of these things are sources of error, I'm doing basically magically perfect on my modeling side, while still inherently there's noise in the data. There are things that just trying to form predictions from square feet alone can't capture. And so that's where irreducible error or this sigma squared is coming through. And so intuitively this is why our
prediction errors are a sum of these three different terms that now we've defined much more formally.
<img src="images/lec3_pic66.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 12:00*
<!--TEASER_END-->
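Putting the pieces together, the decomposition described above can be written compactly in the notation used here (this is just a summary of the statement in the lecture, with everything evaluated at the target $x_t$):

$$E_{train,y_t}\big[(y_t - f_{\hat w(train)}(x_t))^2\big] = \sigma^2 + \big[\text{bias}(f_{\hat w}(x_t))\big]^2 + \text{var}(f_{\hat w}(x_t))$$

where

$$\text{bias}(f_{\hat w}(x_t)) = f_{w(true)}(x_t) - f_{\bar w}(x_t), \qquad \text{var}(f_{\hat w}(x_t)) = E_{train}\big[(f_{\hat w(train)}(x_t) - f_{\bar w}(x_t))^2\big]$$

and $f_{\bar w}(x_t) = E_{train}\big[f_{\hat w(train)}(x_t)\big]$ is the average fit at $x_t$.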
## 2) Formally deriving the 3 sources of error
Why specifically these are the three sources of error, and why they appear as sigma squared plus bias squared plus variance.
Let's start by recalling our definition of expected prediction error, which was the expectation over training data sets of our generalization error. And here I'm using just a shorthand notation, train, instead of training set. (train = training set)
So let's plug in the formal definition of our generalization error. And remember that our generalization error was our expectation over all possible input and output pairs, X, Y pairs of our loss. And so that's what is written here on the second line. And then let's remember that we talked about specifying things specifically at a target $x_t$, and under an assumption of
using a loss function of squared error. And so again we're gonna use this to form all of our derivations. And so when we make these two assumptions, then this expected prediction error at $x_t$ simplifies to the following where there's no longer an expectation over x because we're fixing our point in the input space to be $x_t$. And our expectation over y becomes an expectation over yt because we're only interested in the observations that appear for an input at xt. So, the other thing that we've done in this equation is we've plugged in our specific definition of our loss function as our squared error loss. So, for the remainder of this video, we're gonna start with this equation and we're gonna derive why we get this specific form, sigma-squared plus bias squared plus variance.
<img src="images/lec3_pic66.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 2:00*
<!--TEASER_END-->
Expected prediction error at $x_t$
$$= \large E_{train,y_t}[(y_t - f_{\hat w(train)}(x_t))^2]$$
So this is the definition of expected prediction error at $x_t$ that we had on the previous slide, under our assumption of squared error loss. What we can do is rewrite this equation as follows, where what we've done is simply add and subtract the true function, the true relationship between x and y, specifically at $x_t$. And because we've simply added and subtracted the same quantity, nothing in this equation has changed as a result.
$$= \large E_{train,y_t}[((y_t - f_{w(true)}(x_t)) + (f_{w(true)}(x_t) - f_{\hat w(train)}(x_t)))^2]$$
Let's do a little aside here, because it is useful. So if we take the expectation of some quantity:
$$ \large E[(a + b)^2] \\
= E[a^2 + 2ab + b^2] \\
= E[a^2] + 2E[ab] + E[b^2]$$
I'm going to define some shorthand for writing purposes:
- $y_t$: y
- $f_{w(true)}$: f
- $f_{\hat w(train)}$: $\hat f$
Now that we've set the stage for this derivation, let's rewrite this term using the identity above, with $a = y - f$ and $b = f - \hat f$ (remember I'm writing $y_t$ just as $y$). The first term is the expectation of $a^2$, which is $(y - f)^2$. Then I get two times the expectation of $a$ times $b$, which is $(y - f)(f - \hat f)$. And the final term is the expectation of $b^2$, which is $(f - \hat f)^2$. In every case the expectation is taken over the training data set and the observation $y$.
$$= \large E_{train,y}[(y-f)^2] +2E_{train,y}[(y - f)(f- \hat f)] + E_{train,y}[(f- \hat f)^2]$$
Now let's simplify this a bit.
Does anything in this first term depend on my training set? Well y is not a function of the training data, F is not a function of the training data, that's the true function. So this expectation over the training set, that's not relevant for this first term here. And when I think about the expectation over y, well what is this? This is the difference between my observation and the true function. And that's specifically, that's epsilon. So what this term here is, this is epsilon squared. And epsilon has zero mean so if I take the expectation of epsilon squared that's just my variance from the world. That's sigma squared. Okay so
this first term results in sigma squared.
$$ E_{train,y}[(y - f)^2] = E[\epsilon^2] = \sigma^2 $$
Now let's look at this second term. Actually, to make it very clear, I'll just say that this first term here is sigma squared by definition. Okay, now on to the second term. Again, what is $y - f$? Well, $y - f$ is this epsilon noise term, and our noise is completely independent of $f$ or $\hat f$.
- If I take the expectation of A and B, where A and B are independent random variables, then the expectation of A times B is equal to the expectation of A times the expectation of B. So, this is another little aside.
$$E[ab] = E[a]E[b] \text{, where a, b are independent variables.}$$
And so what I'll get here is that this term is the expectation of epsilon times the expectation of $f - \hat f$. And what's the expectation of epsilon, my noise? It's zero. Remember, we said again and again that we're assuming epsilon has zero mean (any nonzero mean could just be incorporated into f). So this term is zero, the result of this whole thing is going to be zero, and we can ignore the second term.
$$E[(y - f)(f- \hat f)] \\
= E[\epsilon] E[f - \hat f] \\
= 0 \cdot E[f - \hat f] \\
= 0$$
Let's look at this last term. For this slide, I'm simply gonna call it the mean squared error. The little equals sign with a triangle on top denotes a definition: I'm defining this quantity to be something called the mean squared error of $\hat f$, written out below in case you want to look it up later.
$$E[(f- \hat f)^2] = MSE(\hat f)$$
Now that I've gone through and done that, I can say that the result of all this derivation is that I get a quantity sigma squared plus the mean squared error of $\hat f$.
$$\large E_{train,y}[(y-f)^2] +2E_{train,y}[(y - f)(f- \hat f)] + E_{train,y}[(f- \hat f)^2] \\
= \sigma^2 + MSE(\hat f)$$
But so far we've said a million times that my expected prediction error at $x_t$ is sigma squared plus bias squared plus variance. On the next slide what we're gonna do is we're gonna show how our mean squared error is exactly equal to bias squared plus variance.
<img src="images/lec3_pic74.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 10:00*
<!--TEASER_END-->
What I've done is I've started this slide by writing the mean squared error of what, on the previous slide, we were calling $\hat f$, our shorthand notation.
$$MSE[f_{\hat w(train)}(x_t)] = \\
E_{train}[(f_{w(true)}(x_t) - f_{\hat w(train)}(x_t))^2]$$
- $f_{\hat w(train)}(x_t) = \hat f$
And so the mean squared error of $\hat f$, according to the definition on the previous slide, is the expectation of $(f - \hat f)^2$. And I can mention here: when I take this expectation over the training data and my observation y, does the observation y appear anywhere in $f - \hat f$? No, so I can drop that y from the expectation. If I look at this, repeated here on this next slide, I have the expectation over my training data of my true function, which on the last slide I'd just been denoting as f, minus the estimated function, which I'd been denoting as $\hat f$, all squared. And both of these quantities are evaluated specifically at $x_t$.
$$= E_{train}[((f_{w(true)}(x_t) - f_{\bar w}(x_t)) + (f_{\bar w}(x_t) - f_{\hat w(train)}(x_t)))^2]$$
Again let's go through expanding this, where in this case, when we rewrite this quantity in a way that's gonna be useful for this derivation, we're gonna add and subtract $f_{\bar w}$ (f sub w bar). Remember that $f_{\bar w}$ was the green dashed line in all those bias-variance plots. What $f_{\bar w}$ is, is looking over all possible training data sets, where for each training data set I get a specific fitted function, and then averaging all those fitted functions over those different training data sets. That's what results in $f_{\bar w}$: it's my average fit for my specific model, averaged over training data sets. And so for simplicity here, I'm gonna refer to $f_{\bar w}$ as $\bar f$.
- $f_{\bar w} = \bar f$
Using that same trick of taking the expectation of A plus B squared and completing the square and then passing the expectation through, I'm going to do the same thing here
$$= E_{train}[(f - \bar f)^2] + 2E_{train}[(f - \bar f)(\bar f - \hat f)] + E_{train}[(\bar f - \hat f)^2]$$
Now let's go through and talk about what each of these quantities is.
- And the first thing is, let's just remember what the definition of $\bar f$ was formally. It was my expectation over training data sets of $\hat f$, my fitted function on a specific training data set. I've already taken the expectation over the training set here. f is the true relationship; f has nothing to do with the training data. This is a number, and this is the mean of a random variable, so it no longer has to do with the training data set either, because I've averaged over training data sets. So there's really no expectation over training data sets here; nothing is random in terms of the training data set for this first quantity. So $\bar f = E_{train}[\hat f]$
- This first quantity is really simply $(f - \bar f)^2$, and what is that? That's the difference between the true function and my average, my expected fit. Specifically at $x_t$, but squared. That is bias squared. That's by definition. So $$E_{train}[(f - \bar f)^2] = (f - \bar f)^2 = bias^2(\hat f)$$
Now let's look at this second term. The factor $(f - \bar f)$ is not a function of the training data, so it's just like a scalar: it can come out of the expectation. So for this second term I can rewrite this as
$$2E_{train}[(f - \bar f)(\bar f - \hat f)] \\
= 2(f- \bar f) E_{train}[\bar f - \hat f]$$
- Okay. And now let's rewrite this term, and just pass the expectation through. The first thing is, again, $\bar f$ is not a function of the training data, so the result of that is just $\bar f$, and then I'm gonna get minus the expectation over my training data of $\hat f$.
$$E_{train}[\bar f - \hat f] = \bar f - E_{train}[\hat f]$$
- So, what is this $E_{train}[\hat f]$? This is the definition of $\bar f$. This is taking my specific fit on a specific training data set, evaluated at $x_t$, and taking the expectation over all training data sets. That's exactly the definition of what $\bar f$ is, that average fit.
$$E_{train}[\hat f] = \bar f$$
- So, this term here is equal to 0
$$E_{train}[\bar f - \hat f] \\
= \bar f - E_{train}[\hat f] \\
= \bar f - \bar f \\
= 0$$
That just leaves one more quantity to analyze, and that's the last term here, where what I have is an expectation of a function minus its mean, squared. Let me just note that I can equivalently write this as $(\hat f - \bar f)^2$; I hope it's clear that the sign flip doesn't matter, because it gets squared, so they're exactly equivalent. And so what is this?
- $\hat f$: this is a random function; evaluated at $x_t$, it's just a random variable.
- $\bar f$: and this is its mean.
And so the definition of taking the expectation of some random variable minus its mean squared, that's the definition of variance. So, this term is the variance of f hat.
$$E_{train}[(\bar f - \hat f)^2] = E[(\hat f - \bar f)^2] = var(\hat f)$$
<img src="images/lec3_pic75.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 19:00*
<!--TEASER_END-->
That's exactly what we were hoping to show, because now we can put it all together. What we see is that our expected prediction error at $x_t$ was derived to be equal to sigma squared plus mean squared error. And then we derived the fact that mean squared error is equal to bias squared plus variance. So, we get the end result that our expected prediction error at $x_t$ is sigma squared plus bias squared plus variance, and this represents our three sources of error. And we've now completed our formal derivation of this.
<img src="images/lec3_pic76.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 20:00*
<!--TEASER_END-->
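As a sanity check on this algebra, here is a small Monte Carlo simulation (my own sketch, not from the course): I pick an arbitrary true function, noise level, and model degree, draw many training sets, fit on each, and compare the expected prediction error at a fixed $x_t$ against sigma squared plus bias squared plus variance. All the settings below are made-up choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = lambda x: np.sin(2 * np.pi * x)      # assumed "true" relationship
sigma = 0.3                                   # noise std dev, so sigma^2 is the irreducible error
x_t, N, degree, n_datasets = 0.5, 30, 2, 5000

preds = np.empty(n_datasets)
for i in range(n_datasets):
    x = rng.uniform(0, 1, N)                  # a fresh training set of size N
    y = f_true(x) + rng.normal(0, sigma, N)
    w_hat = np.polyfit(x, y, degree)          # fit the model on this training set
    preds[i] = np.polyval(w_hat, x_t)         # f_hat(x_t) for this training set

f_bar = preds.mean()                          # average fit at x_t
bias_sq = (f_true(x_t) - f_bar) ** 2
variance = preds.var()
y_t = f_true(x_t) + rng.normal(0, sigma, n_datasets)   # fresh observations at x_t
epe = np.mean((y_t - preds) ** 2)             # expected prediction error at x_t

print(f"EPE ~ {epe:.4f}  vs  sigma^2 + bias^2 + var ~ {sigma**2 + bias_sq + variance:.4f}")
```

The two printed numbers should agree up to Monte Carlo noise, which is exactly the identity derived above.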
# 5) Putting the pieces together
## 1) Training/validation/test split for model selection, fitting, and assessment
Let's wrap up by talking about two really important tasks when you're doing regression. And through this discussion, it's gonna motivate another important concept of thinking about validation sets.
So, the two important tasks in regression are: first, we need to choose a specific model complexity. So for example, when we're talking about polynomial regression, what's the degree of that polynomial? And then, for our selected model, we assess its performance. And actually these two steps aren't specific just to regression. We're gonna see this in all different aspects of machine learning, where we have to specify our model and then we need to assess the performance of that model. So, what we're gonna talk about in this portion of this module generalizes well beyond regression. And for this first task, where we're talking about choosing the specific model, we're gonna talk about it in terms of some set of tuning parameters, lambda, which control the model complexity. Again, for example, lambda might specify the degree of the polynomial in polynomial regression.
<img src="images/lec3_pic77.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 1:00*
<!--TEASER_END-->
So, let's first talk about how we can think about choosing lambda. And then for a given model specified by lambda, a given model complexity, let's think about how we're gonna assess the performance of that model.
Well, one really naive approach is to do what we've described before, where you take your data set and split it into a training set and a test set. And then, what we're gonna do is for our model selection portion where we're choosing the model complexity lambda.
For every possible choice of lambda, we're gonna estimate model parameters associated with that lambda model on the training set. And then we're gonna test the performance of that fitted model on the test set. And we're gonna tabulate that for every lambda that we're considering. And we're gonna choose our tuning parameters as the ones that minimize this test error, so the ones that perform best on the test data. And we're gonna call those parameters lambda star.
So, now I have my model. I have my specific degree of polynomial that I'm gonna use. And I wanna go and assess the performance of this specific model. And the way I'm gonna do this is I'm gonna take my test data again. And I'm gonna say, well, okay, I know that test error is an approximation of generalization error. So, I'm just gonna compute the test error for this lambda star fitted model. And I'm gonna use that as my approximation of the performance of this model. Well, what's the issue with this? Is this gonna perform well? No, it's really overly optimistic.
<img src="images/lec3_pic78.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 2:50*
<!--TEASER_END-->
So, this issue is just like what we saw when we weren't dealing with this notion of choosing model complexity. We just assumed that we had a specific model, like a specific degree polynomial. But we wanted to assess the performance of the model. And the naive approach we took there was saying, well, we fit the model to the training data, and then we're gonna use training error to
assess the performance of the model. And we said, that was overly optimistic because we were double dipping. We already used the data to fit our model. And then, so that error was not a good measure of how we're gonna perform on new data.
Well, it's exactly the same notion here and let's walk through why. Most specifically, when we're thinking about choosing our model complexity, we were using our test data to compare between different lambda values. And we chose the lambda value that
minimized the error on that test data that performed the best there. So, you could think of this as having fit lambda, this model complexity tuning parameter, on the test data. And now, we're thinking about using test error as a notion of approximating
how well we'll do on new data. But the issue is, unless our test data represents everything we might see out there in the world,
that's gonna be way too optimistic. Because lambda was chosen, the model was chosen, to do well on the test data and so that won't generalize well to new observations.
<img src="images/lec3_pic79.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 4:00*
<!--TEASER_END-->
So, what's our solution? Well, we can just create two test data sets. They won't both be called test sets, we're gonna call one of them a validation set. So, we're gonna take our entire data set, just to be clear. And now, we're gonna split it into three data sets.
One will be our training data set, one will be what we call our validation set, and the other will be our test set. And then what we're gonna do is, we're going to fit our model parameters always on our training data, for every given model complexity that we're considering. But then we're gonna select our model complexity as the model that performs best on the validation set
has the lowest validation error. And then we're gonna assess the performance of that selected model on the test set. And we're gonna say that that test error is now an approximation of our generalization error. Because that test set was never used in
either fitting our parameters, w hat, or selecting our model complexity lambda, that other tuning parameter. So, that data was completely held out, never touched, and it now forms a fair estimate of our generalization error.
<img src="images/lec3_pic80.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 5:00*
<!--TEASER_END-->
So in summary, we're gonna fit our model parameters for any given complexity on our training set. Then we're gonna, for every fitted model and for every model complexity, we're gonna assess the performance and tabulate this on our validation set. And we're gonna use that to select the optimal set of tuning parameters lambda star. And then for that resulting model, that w hat sub lambda star, we're gonna assess a notion of the generalization error using our test set.
<img src="images/lec3_pic81.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 6:00*
<!--TEASER_END-->
And so a question, is how can we think about doing the split between our training set, validation set, and test set? And there's no hard and fast rule here, there's no one answer that's the right answer. But typical splits that you see out there
are something like an 80-10-10 split. So, 80% of your data for training, 10% for validation, 10% for test. Or another common split is 50%, 25%, 25%. But again, this is assuming that you have enough data to do this type of split and still get reasonable estimates of your model parameters, reasonable notions of how different model complexities compare. Because you have a large enough validation set, and you still have a large enough test set in order to assess the generalization error of the resulting model. And if this isn't the case, we're gonna talk about other methods that allow us to do these same types of notions, but not with this type of hard division between training, validation, and test.
<img src="images/lec3_pic81.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 7:00*
<!--TEASER_END-->
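To make the workflow concrete, here is a rough sketch in code (my own illustration; the data, the candidate degrees, and the 80/10/10 split sizes are all placeholders). The structure follows the slides: fit parameters on the training set, pick the tuning parameter lambda (here, polynomial degree) on the validation set, and only then report error on the untouched test set.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)   # made-up data

# 80/10/10 split into training / validation / test sets.
idx = rng.permutation(200)
tr, va, te = idx[:160], idx[160:180], idx[180:]

def avg_sq_loss(w, rows):
    return np.mean((y[rows] - np.polyval(w, x[rows])) ** 2)

# 1) For each model complexity (polynomial degree), fit on the training set
#    and tabulate the error on the validation set.
val_errors = {}
for degree in range(1, 11):
    w = np.polyfit(x[tr], y[tr], degree)
    val_errors[degree] = avg_sq_loss(w, va)

# 2) Select lambda* = the degree with the lowest validation error.
best_degree = min(val_errors, key=val_errors.get)

# 3) Assess the selected model on the test set, which was never touched above.
w_star = np.polyfit(x[tr], y[tr], best_degree)
print("selected degree:", best_degree, "| test error:", avg_sq_loss(w_star, te))
```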
## 2) A brief recap
<img src="images/lec3_pic83.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/FT2HG/a-brief-recap) 1:00*
<!--TEASER_END-->
# 1) Defining how we assess performance
## What do we mean by "loss"?
<img src="images/lec3_pic01.png">
<img src="images/lec3_pic02.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/cGUQ3/what-do-we-mean-by-loss) 1:00*
<!--TEASER_END-->
How do we formalize this notion of how much we're losing? And in machine learning, we do this by defining something called a loss function.
And what the loss function specifies is the cost incurred when the true observation is y, and I make some other prediction. So, a bit more explicitly, what we're gonna do, is we're gonna estimate our model parameters. And those are $\hat w$. We're gonna use those to form predictions.
- $f_{\hat w}(x) = \hat f(x)$, it's our predicted value at some input x.
The loss function L is somehow measuring the difference between these two things.
And there are a couple of ways in which we could define a loss function. And very common choices include assuming
something that's called absolute error, which just looks at the absolute value of the difference between your true value and your predicted value. And another common choice is something called squared error, where, instead of just looking at the absolute value, you look at the square of that difference. And so that means that you have a very high cost if that difference is large, relative to just absolute error.
<img src="images/lec3_pic03.png">
<img src="images/lec3_pic04.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/cGUQ3/what-do-we-mean-by-loss) 3:30*
<!--TEASER_END-->
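As a small aside (not from the lecture), these two loss functions are easy to write down in code; `y_true` and `y_pred` below are just placeholders for an observed value and a prediction:

```python
import numpy as np

def absolute_error(y_true, y_pred):
    """L(y, f(x)) = |y - f(x)|: cost grows linearly with the size of the mistake."""
    return np.abs(y_true - y_pred)

def squared_error(y_true, y_pred):
    """L(y, f(x)) = (y - f(x))^2: large mistakes are penalized much more heavily."""
    return (y_true - y_pred) ** 2

# A $10k error vs. a $100k error: squared error magnifies the big mistake far more.
print(absolute_error(500_000, 510_000), squared_error(500_000, 510_000))
print(absolute_error(500_000, 600_000), squared_error(500_000, 600_000))
```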
# 2) 3 measures of loss and their trends with model complexity
## 1) Training error: assessing loss on the training set
The first measure of error of our predictions that we can look at is something called training error. And we discussed this at a high level in the first course of the specialization, but now let's go through it in a little bit more detail.
So, to define training error, we first have to define training data. For training data, typically you have some dataset, which I've shown you as these blue circles here, and we're going to choose as our training dataset just some subset of these points. So, the greyed circles are ones that are not included in the training set. The blue circles are the ones that we're keeping in this training set. And then we take our training data and, as we've discussed in previous modules of this course, we use it in order to fit our model, to estimate our model parameters. Just as an example, with this dataset here, maybe we choose to fit some quadratic function to the data, and like we've talked about, in order to fit this quadratic function we're gonna minimize the residual sum of squares on these training data points.
<img src="images/lec3_pic05.png">
<img src="images/lec3_pic06.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 1:00*
<!--TEASER_END-->
So, now we have our estimated model parameters, w hat. And we want to assess the training error of that estimated model. And the way we do that is first we need to define some lost functions. So, maybe we look at squared error, absolute error.
And then the way training error's defined is simply as the average loss, defined over the training points. So, mathematically what this is is simply:
$$\dfrac{1}{N} \sum_{i=1}^N L(y_i, f_{\hat w}(x_i))$$
- N: the total number of observations in my training set
And just to remember, to be very clear: the estimated parameters were estimated on the training set. They were minimizing the residual sum of squares for these training points that we're looking at again in defining this training error.
<img src="images/lec3_pic07.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 2:00*
<!--TEASER_END-->
So, we can go through this pictorially in the following example, where in this case we're specifically looking at using squared error as our loss function. And in this case, our training error is simply $\dfrac{1}{N}$ times the sum of the difference between our actual house sales price and our predicted house sales price squared, where that sum is taken over all houses in our training data set. And what we see is that in this case where we choose squared error as our loss function, then the form of training error is exactly $\dfrac{1}{N}$ times our residual sum of squares. So, just be aware of that when you're computing training error and reporting these numbers. Here we're defining it as the average loss.
<img src="images/lec3_pic08.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 3:00*
<!--TEASER_END-->
More formally we can write our training error as follows and then we can define something that's commonly referred to just as something as RMSE and the full name is root mean square error. And RMSE is simply the square root of our average loss on the training houses. So, the square root of our training error. And the reason one might consider looking at root mean square error is because the units, in this case, are just dollars. Whereas when we thought about our training error, the units were dollars squared.
<img src="images/lec3_pic09.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 3:39*
<!--TEASER_END-->
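Here is a minimal sketch of computing training error and RMSE under squared error loss; the (square feet, price) numbers and the quadratic fit are made up purely for illustration:

```python
import numpy as np

# Hypothetical training data: square feet and sale prices.
x_train = np.array([1000, 1500, 2000, 2500, 3000], dtype=float)
y_train = np.array([300_000, 420_000, 510_000, 640_000, 700_000], dtype=float)

# Fit a quadratic model on the training set (this gives the estimated w_hat).
w_hat = np.polyfit(x_train, y_train, deg=2)
y_hat = np.polyval(w_hat, x_train)

# Training error = average squared-error loss on the training points, i.e. (1/N) * RSS.
training_error = np.mean((y_train - y_hat) ** 2)
# RMSE is its square root, so the units come back to dollars instead of dollars squared.
rmse = np.sqrt(training_error)
print("training error (dollars^2):", training_error, "| RMSE (dollars):", rmse)
```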
Now, that we've defined training error, we can look at how training error behaves as model complexity increases. So, to start with let's look at the simplest possible model you might fit, which is just a constant model. So this is the simplest model we're gonna consider, or could consider, and you see that there is pretty significant training error.
Then let's say I fit a linear model. Well, a line, these are all linear models we're looking at, it's linear regression. But just fitting a line to the data. And you see that my training error has gone down.
Then I fit a quadratic function, and again training error goes down. And what I see is that as I increase my model complexity to maybe this higher-order polynomial, I have very low training error, just this one pink bar here. So, training error decreases quite significantly with model complexity.
So, there's a decrease in training error as you increase your model complexity. And why is that? Well, it's pretty intuitive, because the model was fit on the training points and then I'm saying how well does it fit it? As I increase the model complexity, I'm better and better able to fit my training data points. So, then when I go to assess my training error with these high-complexity models, I have very low training error.
<img src="images/lec3_pic10.png">
<img src="images/lec3_pic11.png">
<img src="images/lec3_pic12.png">
<img src="images/lec3_pic13.png">
<img src="images/lec3_pic14.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 5:00*
<!--TEASER_END-->
So, a natural question is whether training error is a good measure of predictive performance. And what we're showing here is
one of our high-complexity, high-order polynomial models that had very low training error. So it really fit those training data points well. But how's it gonna perform on some new house?
<img src="images/lec3_pic15.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 6:00*
<!--TEASER_END-->
So, in particular, maybe we're looking at a house in this gray region, so with this range of square feet. The question is, is there something particularly wrong with having $x_t$ square feet? Because what our fitted function is saying is that I believe, or I'm predicting, that houses with roughly $x_t$ square feet are less valuable than houses with fewer square feet, cuz there's this dip down in this function. Do we really believe that this is a true dip in value, that these houses are just less desirable than houses with fewer or more square feet? Probably not. So, what's going wrong here?
<img src="images/lec3_pic16.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 6:45*
<!--TEASER_END-->
The issue is the fact that training error is overly optimistic when we're going to assess predictive performance. And that's because these parameters, $\hat w$, were fit on the training data. They were fit to minimize residual sum of squares, which can often be related to training error. And then we're using training error to assess predictive performance but that's gonna be very very optimistic as this picture shows. So, in general, having small training error does not imply having good predictive performance unless your training data set is really representative of everything that you might see there out in the world.
<img src="images/lec3_pic17.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/VN4Qo/training-error-assessing-loss-on-the-training-set) 7:30*
<!--TEASER_END-->
## 2) Generalization error: what we really want
So, instead of using training error to assess our predictive performance. What we'd really like to do is analyze something that's called generalization or true error. So, in particular, we really want an estimate of what the loss is averaged over all houses that we might ever see in our neighborhood. But really, in our dataset we only have a few examples of houses that were sold. But there are lots of other houses that are in our neighborhood that we don't have in our dataset, or other houses that
you might imagine having been sold.
<img src="images/lec3_pic18.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 0:30*
<!--TEASER_END-->
Okay, so to compute this estimate over all houses that we might see in our dataset, we'd like to weight these house pairs,
so the pair of house attributes and the house sales price, by how likely that pair is to have occurred in our dataset. So to do this we can think about defining a distribution, and in this case over square feet of houses in our neighborhood.
What this picture is showing is a distribution that says we're very unlikely to see houses with very small or low number of square feet, very small houses. And we're also very unlikely to see really, really massive houses. So there's some bell curve to this, there's some sweet spot of kind of typical houses in our neighborhood, and then the likelihood drops off from there.
<img src="images/lec3_pic19.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 1:30*
<!--TEASER_END-->
Likewise what we can do is define a distribution that says, for a given square footage of a house, what's the distribution over
the sales price of that house? So let's say the house has 2,640 square feet. Maybe I expect the range of house prices to be somewhere between 680,000 to maybe 950,000. That might be a typical range. But of course, you might see much lower valued houses or higher value, depending on the quality of that house.
<img src="images/lec3_pic20.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 1:39*
<!--TEASER_END-->
Formally when we go to define our generalization error, we're saying that we're taking the average value of our loss weighted by how likely those pairs were in our dataset.
So specifically we estimate our model parameters on our training data set so that's what gives us $\hat w$. That defines the model we're using for prediction, and then we have our loss function, assessing the cost of predicting $f_{\hat w}$ at our square foot x when the true value was y. And then what we're gonna do is we're gonna average over all possible (x,y). But weighted by how likely they are according to those distributions over square feet and value given square feet.
<img src="images/lec3_pic21.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 3:00*
<!--TEASER_END-->
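We can never actually compute this for real data, but under an assumed data-generating process we can approximate it by sampling; everything in the sketch below (the input distribution, the true price function, the noise level) is an invention for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
f_true = lambda sqft: 150 * sqft + 50_000     # assumed true relationship (unknown in practice)
sigma = 40_000                                # assumed noise in sale prices

# Fit w_hat on one small training set; this fit is then held fixed.
x_train = rng.normal(2000, 500, 20)
y_train = f_true(x_train) + rng.normal(0, sigma, 20)
w_hat = np.polyfit(x_train, y_train, 2)

# Generalization error ~ average loss over (x, y) pairs drawn from the "world"
# distribution; sampling weights each pair by how likely it is to occur.
x_world = rng.normal(2000, 500, 500_000)
y_world = f_true(x_world) + rng.normal(0, sigma, 500_000)
gen_error = np.mean((y_world - np.polyval(w_hat, x_world)) ** 2)
print("approximate generalization error:", gen_error)
```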
Let's go back to these plots of looking at error versus model complexity. But in this case let's quantify our generalization error as a function of this complexity.
And to do this, what I'm showing by this blue shaded region here, which has a gradation going from white to darker blue, is the distribution of houses that I'm likely to see in my dataset. So, this white region here, these are the houses that I'm very likely to see, and then as I go further away from the white region I get to less likely house sale prices given a specific square foot value.
And so when I think about generalization error, I'm gonna take my fitted function, where remember this green line was fit on the training data, which are these blue circles. And then I'm gonna say, how well does it predict houses in this shaded blue region, weighted by how likely they are, how close to that white region.
Okay, so what I see here is this constant model, which really doesn't approximate things well except maybe in this region here. So overall it has a reasonably high generalization error, and I can go to my more complex model.
<img src="images/lec3_pic22.png">
<img src="images/lec3_pic23.png">
<img src="images/lec3_pic24.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 5:00*
<!--TEASER_END-->
Then I get to this much higher order polynomial, and when we were looking at training error, the training error was lower, right? But now, when we think about generalization error, we actually see that the generalization error is gonna go up relative to the simpler model.
<img src="images/lec3_pic25.png">
<img src="images/lec3_pic26.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 6:50*
<!--TEASER_END-->
So our generalization error in general will have some shape where it's going down. And then we get to a point where
the error starts increasing. Sorry, that should have been a smoother curve. The error starts increasing because we're getting to these overly complex models that fit the training data really well but don't generalize to other houses that we might see.
But importantly, in contrast to training error we can't actually compute generalization error. Because everything was relative
to this true distribution, the true way in which the world works. How likely houses are to appear in our dataset over all possible square feet and all possible house values. And of course, we don't know what that is. So, this is our ideal picture or
our cartoon of what would happen. But we can't actually go along and compute these different points.
<img src="images/lec3_pic27.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/CDx5h/generalization-error-what-we-really-want) 8:00*
<!--TEASER_END-->
## 3) Test error: what we can actually compute
So we can't compute generalization error, but we want some better measure of our predictive performance than training error gives us. And so this takes us to something called test error, and what test error is going to allow us to do is approximate generalization error.
And the way we're gonna do this is by approximating the error, looking at houses that aren't in our training set.
<img src="images/lec3_pic28.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:00*
<!--TEASER_END-->
So instead of including all these colored houses in our training set, we're gonna shade out some of them, these shaded gray houses and we're gonna make these into what's called a test set.
<img src="images/lec3_pic29.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:15*
<!--TEASER_END-->
And when we go to fit our models, we're just going to fit our models on the training data set. But then when we go to assess
our performance of that model, we can look at these test houses, and these are hopefully going to serve as a proxy of everything out there in the world. So hopefully, our test data set is a good measure of other houses that we might see, or at least in order to think of how well a given model is performing.
<img src="images/lec3_pic30.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 1:25*
<!--TEASER_END-->
So test error is gonna be our average loss computed over the houses in our test data set.
- $N_{test}$: the number of houses in our test data set
- $\hat w$: very important, these estimated parameters were fit on the training data set
Okay, so even though this function looks very much like training error, the sum is over the test houses, but the function we're looking at was fit on training data. Okay, so these parameters in this fitted function never saw the test data.
<img src="images/lec3_pic31.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:20*
<!--TEASER_END-->
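A hedged sketch of that computation (the data, the split, and the quadratic model are all made up): the key point is that `w_hat` is fit only on the training rows, while the average loss is taken over the held-out test rows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(800, 3500, size=60)                       # hypothetical square footages
y = 150 * x + 50_000 + rng.normal(0, 40_000, size=60)     # hypothetical prices with noise

# Carve out a test set; everything else is the training set.
is_test = np.zeros(len(x), dtype=bool)
is_test[rng.choice(len(x), size=15, replace=False)] = True

w_hat = np.polyfit(x[~is_test], y[~is_test], deg=2)       # fit on training data only

# Test error: average squared error over the N_test held-out houses.
test_error = np.mean((y[is_test] - np.polyval(w_hat, x[is_test])) ** 2)
print("test error:", test_error)
```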
So just to illustrate this, we might think of fitting a quadratic function through this data, where we're gonna minimize the residual sum of squares on the training points, those blue circles, to get our estimated parameters $\hat w$.
<img src="images/lec3_pic32.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:33*
<!--TEASER_END-->
Then when we go to compute our test error, which in this case again we're gonna use squared error as an example, we're computing this error over the test points, all these grey circles here. So test error is $\dfrac{1}{N_{test}}$ times the sum of the difference between our true house sales prices and our predicted prices squared, summing over all houses in our test data set.
<img src="images/lec3_pic33.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 2:45*
<!--TEASER_END-->
**Let's summarize our measures of error as a function of model complexity**
- Our training error decreased with increasing model complexity.
- In contrast, our generalization error went down for some period of time. But then we started getting to overly complex models that didn't generalize well, and the generalization error started increasing. So here we have generalization error, or true error.
- Our test error is a noisy approximation of generalization error. Because if our test data set included everything we might ever see in the world, in proportion to how likely it was to be seen, then that would be exactly our generalization error. But of course, our test data set is just some finite data set, and we're using it to approximate generalization error, so it's gonna be some noisy version of this curve here.
Test error is the thing that we can actually compute. Generalization error is the thing that we really want.
<img src="images/lec3_pic34.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/pq0SM/test-error-what-we-can-actually-compute) 3:00*
<!--TEASER_END-->
## 4) Defining overfitting
The notion of overfitting is this: suppose you have a model with estimated parameters $\hat w$. The model is overfit if there exists another set of estimated parameters, call them $w'$, such that two conditions hold:
- training error ($\hat w$) < training error ($w'$).
- true error ($\hat w$) > true error ($w'$).
Generally, the models that are overfit are the ones that have smaller training error. These are the ones that are really highly fit to the training data set but don't generalize well. Whereas the other points on the other half of this space are the ones that are not really well fit to the training data and also don't generalize well.
<img src="images/lec3_pic35.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/u8c2x/defining-overfitting) 2:00*
<!--TEASER_END-->
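To make those two conditions concrete, here is a small simulated check (my own sketch, with an arbitrary sine-shaped true function): a high-degree polynomial playing the role of $\hat w$ typically shows lower training error but higher test error than a simpler fit playing the role of $w'$, which is exactly the overfitting pattern defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
f_true = lambda x: np.sin(2 * np.pi * x)            # stand-in "true" relationship
x_train = rng.uniform(0, 1, 15)
y_train = f_true(x_train) + rng.normal(0, 0.3, 15)
x_new = rng.uniform(0, 1, 2000)                     # proxy for new, unseen data
y_new = f_true(x_new) + rng.normal(0, 0.3, 2000)

def train_and_test_error(degree):
    w = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((y_train - np.polyval(w, x_train)) ** 2)
    test_err = np.mean((y_new - np.polyval(w, x_new)) ** 2)
    return train_err, test_err

print("degree 3 (w'):     train %.3f, test %.3f" % train_and_test_error(3))
print("degree 9 (w_hat):  train %.3f, test %.3f" % train_and_test_error(9))
```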
## 5) Training/test split
So we've said to assess the performance of our model, we really need to have a test data set carved out from our full data set. So, this raises the question of, how do I think about dividing the data set into training data versus test data?
- If I put too few points in my training set, then I'm not going to estimate my model well. And so, I'm going to have clearly bad predictive performance because of that.
- If I put too few points in my test set, that's gonna be a bad approximation to generalization error.
A general rule of thumb is that you typically want just enough points in your test set to approximate generalization error well, and you want all your remaining points in your training data set, because you want as many points as possible in your training data set to learn a good model.
<img src="images/lec3_pic36.png">
<img src="images/lec3_pic37.png">
<img src="images/lec3_pic38.png">
<img src="images/lec3_pic39.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qn2vj/training-test-split) 1:00*
<!--TEASER_END-->
# 3) 3 sources of error and the bias-variance tradeoff
## 1) Irreducible error and bias
We've talked about three different measures of error. And now in this part, we're gonna talk about three different sources of error. And this is gonna lead us into a conversation of the bias variance trade-off. Okay, so when we were forming our prediction, there are three different sources of error.
- Noise
- Bias
- Variance
<img src="images/lec3_pic40.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 0:30*
<!--TEASER_END-->
**Let's look at the noise term**
As we've mentioned many times in this specialization, data are inherently noisy.
So the way the world works is that there's some true relationship between square feet and the value of a house. Or generically, between x and y. And we're representing that arbitrary relationship defined by the world, by $f_{w(true)}$, which is the notation we're using for that functional relationship.
But of course that's not a perfect description of the relationship between x and y, the number of square feet and the house value. There are lots of other contributing factors, including other attributes of the house that are not captured just by square feet, or how a person feels when they go in and make a purchase of a house, or a personal relationship they might have with the owners. Or lots and lots of other things that we can't ever perfectly capture with just some function between square feet and value, and so that is the noise that's inherent in this process, represented by this epsilon term ($\epsilon$). So in particular, for any observation $y_i$, it's the sum of this relationship between the square feet and the value plus this noise term $\epsilon_i$ specific to that $i$th house.
And we've talked before about our assumption that this noise has zero mean, because if it didn't, that could be shoved into the f function instead. But what we haven't talked about is the spread of that noise. So at any given square footage, what kind of variation in house price are we likely to see, based on this type of noise that's inherent in our observations? And so this is referred to as the variance of this noise term epsilon. And this is something that's just a property of the data. We don't have control over this. This has nothing to do with our model nor our estimation procedure, it's just something that we have to deal with. And so this is called irreducible error, because it's nothing that we can reduce through choosing a better model or a better estimation procedure.
<img src="images/lec3_pic41.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 2:45*
<!--TEASER_END-->
The things that we can control are bias and variance, so we're gonna focus quite heavily on those two terms. So let's start by talking about bias. And this is basically just an assessment of how well my model can fit the true relationship between x and y.
So to think about this, let's think about how we get the data in our data set. These points that we observed are just a random snapshot of N houses that were sold, recorded, and tabulated in our data set. Well, based on that data set, we fit some function, and thinking about bias, it's intuitive to start with a very simple model of just a constant function. But what if another set of N houses had been sold? Then we would have had a different data set that we were using. And when we went to fit our model, we would have gotten a different line.
In the first data set, I tended to draw points that were below the true relationship, so the houses in our data set happened to have values less than what the world kind of specifies as typical. And on the right-hand side I drew points that tended to lie above the line. So these are pretty extremely different data sets, but what you see is that the fits are pretty similar.
<img src="images/lec3_pic42.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 4:00*
<!--TEASER_END-->
So what we are saying is, over all possible data sets of size N that we might have been presented with of house sales, what do we expect our fit to look like?
There's a continuum of possible fits we might have gotten. And for all those possible fits, this dashed green line represents our average fit, averaged over all those fits weighted by how likely they were to have appeared.
<img src="images/lec3_pic43.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 5:00*
<!--TEASER_END-->
Now we can start talking about bias. What bias is, is it's the difference between this average fit and the true function, $f_{w(true)}$.
That's what this equation shows here, and we're seeing this with this gray shaded region. That's the difference between the true
function and our average fit. And so intuitively what bias is saying is, is our model flexible enough to on average be able to capture the true relationship between square feet and house value. And what we see is that for this very simple constant model, this low complexity model has high bias. It's not flexible enough to have a good approximation to the true relationship. And because of these differences, because of this bias, this leads to errors in our prediction.
<img src="images/lec3_pic44.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/qlMrZ/irreducible-error-and-bias) 6:15*
<!--TEASER_END-->
## 2) Variance and the bias-variance tradeoff
http://scott.fortmann-roe.com/docs/BiasVariance.html
Let's turn to this third component which is a variance.
And what variance is gonna say is, how different can my specific fits to a given data set be from one another, as I'm looking at different possible data sets? And in this case, when we are looking at just this constant model, we showed by that early picture
where I drew points that were mainly above the true relationship and the points mainly below, that the actual resulting fits didn't vary very much. And when you look at the space of all possible observations, you see that the fits, they're fairly
similar, they're fairly stable.
<img src="images/lec3_pic45.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 0:30*
<!--TEASER_END-->
When you look at the variation in these fits, which I'm drawing with these grey bars here. We see that they don't vary very much.
<img src="images/lec3_pic46.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 0:54*
<!--TEASER_END-->
So, for this low complexity model, we see that there's low variance. So, to summarize what this variance is saying is, how much can the fits vary? And if they could vary dramatically from one data set to the other, then you would have very erratic predictions. Your prediction would just be sensitive to what data set you got. So, that would be a source of error in your predictions.
<img src="images/lec3_pic47.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 1:10*
<!--TEASER_END-->
And to see this, we can start looking at high-complexity models. So in particular, let's look at this data set again. And now, let's fit some high-order polynomial to it.
In the right dataset, let's choose two points, which I'm gonna highlight as these pink circles. And let's just move them a little bit. So, out of this whole data set, I've just moved two observations and not too dramatically, but I get a dramatically different fit.
<img src="images/lec3_pic48.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 1:20*
<!--TEASER_END-->
So then, when I think about looking over all possible data sets I might get, I might get some crazy set of curves. There is an average curve. And in this case, the average curve is actually pretty well behaved. Because this wild, wiggly curve is, at any point, equally likely to have been wild above or wild below. So, on average over all data sets, it's actually a fairly smooth, reasonable curve. But if I look at the variation between these fits, it's really large. So, what we're saying is that high-complexity models have high variance.
<img src="images/lec3_pic49.png">
<img src="images/lec3_pic50.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 2:30*
<!--TEASER_END-->
On the other hand, if I look at the bias of this model, here again I'm showing this average fit, which was this fairly well behaved curve. And it matched pretty well to the true relationship between square feet and house value, because my model is really flexible. So on average, it was able to fit pretty precisely that true relationship. So, these high-complexity models have low bias.
<img src="images/lec3_pic51.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 3:00*
<!--TEASER_END-->
We can now talk about this bias-variance tradeoff. So, in particular, we're gonna plot bias and variance as a function of model complexity.
- Model complexity increases, our bias decreases.
- Model complexity increases, variance increases. So, our very simple model had very low variance, and the high-complexity models had high variance.
what we see is there's this natural tradeoff between bias and variance. And one way to summarize this is something that's called mean squared error.
MSE = bias$^2$ + variance
Machine learning is all about this tradeoff between bias and variance. And the goal is finding this sweet spot. This is the sweet spot where we get our minimum error, the minimum contribution of bias and variance, to our prediction errors.
But just like with generalization error, we cannot compute bias and variance, and mean squared error. Well, the reason is because just like with generalization error, they were defined in terms of the true function. Well, bias was defined very
explicitly in terms of the relationship relative to the true function. And when we think about defining variance, we had to average over all possible data sets of size N that we could have gotten from the world (and the same was true for bias too), and we just don't know what that distribution is. So, we can't compute these things exactly. But throughout the rest of this course, we're gonna look at ways to optimize this tradeoff between bias and variance in a practical way.
<img src="images/lec3_pic52.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/ZvP40/variance-and-the-bias-variance-tradeoff) 6:00*
<!--TEASER_END-->
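Bias and variance can't be computed for real data, but they can be estimated in a simulation where the true function is known. The sketch below (my own, with arbitrary settings) sweeps the polynomial degree and typically shows bias squared shrinking while variance grows, which is the tradeoff pictured on the slide.

```python
import numpy as np

rng = np.random.default_rng(4)
f_true = lambda x: np.sin(2 * np.pi * x)
sigma, N, n_datasets = 0.3, 30, 2000
x_grid = np.linspace(0.05, 0.95, 50)        # inputs at which bias/variance are measured

for degree in [0, 1, 2, 5, 9]:
    fits = np.empty((n_datasets, x_grid.size))
    for i in range(n_datasets):
        x = rng.uniform(0, 1, N)                         # a fresh training set
        y = f_true(x) + rng.normal(0, sigma, N)
        fits[i] = np.polyval(np.polyfit(x, y, degree), x_grid)
    f_bar = fits.mean(axis=0)                            # average fit over training sets
    bias_sq = np.mean((f_true(x_grid) - f_bar) ** 2)     # bias^2, averaged over x_grid
    variance = np.mean(fits.var(axis=0))                 # variance, averaged over x_grid
    print(f"degree {degree}: bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}")
```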
## 3) Error vs. amount of data
Let's start with looking at our true error or generalization error. But first, I want to make sure its clear that we are looking at these errors for a fixed model complexity.
If we have very few data points, our fitted function is a pretty poor estimate of the true relationship between x and y. So our true error's gonna be pretty high, so let's say that w hat is not approximated well from few points. But as we get more and
more data, we get a better and better approximation of our model and our true error decreases. But it decreases to some limit.
And what is that limit? Well that limit is the bias plus the noise inherent in the data. Because as we get tons and
tons of observations, well, we're taking our model and fitting it as well as we could ever hope to fit it, because we have every observation out there in the world. But the model might just not be flexible enough to capture the true relationship between x and y, and that is our notion of bias. Plus, of course, there's the error just from the noise in the observations, that other contribution. Okay, so this difference here is the bias of the model plus the noise of the data.
Now let's look at training error. So let's say our training error starts somewhere. But what ends up happening is training error goes up as you get more and more data points. With few data points, a fixed-complexity model can fit them reasonably well, where reasonably of course depends on what the complexity of the model is. But as I get more and more and more data points, that same complexity of model can't hope to fit all these points perfectly well. What is the limit of training error? That limit is exactly the same as the limit of our true error.
The reason is I have tons and tons of points there. That's all points that there could ever be possibly in the world, and I fit my model to it. And if I measure training error, I'm running it to all the possible points there are out there in the world. And that's exactly what our definition of true error is. So they converge to exactly the same point in the limit. Where that difference again, is the bias inherent from the lack of flexibility of the model, plus the noise inherent in the data.
So just to write this down:
- In the limit, as I'm getting lots and lots of data points, this curve is gonna flatten out to how well the model can fit the true relationship $f_{true}$.
- In the limit, true error = training error.
So what we've seen so far in this module are three different measures of error: our training error, our true generalization error, as well as our test error approximation of generalization error. And we've seen three different contributions to our errors, thinking about that inherent noise in the data and then thinking about this notion of bias and variance. And we finally concluded with this discussion on the tradeoff between bias and variance, and how bias appears no matter how much data we have. We can't escape the bias from having a specified model of a given complexity.
<img src="images/lec3_pic53.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/lYBeX/error-vs-amount-of-data) 5:00*
<!--TEASER_END-->
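A quick simulation of this picture (again a sketch with made-up settings): for a fixed model complexity, training error creeps up toward a limit as N grows, while the true error, approximated here on a huge held-out sample, comes down toward that same limit of bias plus noise.

```python
import numpy as np

rng = np.random.default_rng(5)
f_true = lambda x: np.sin(2 * np.pi * x)
sigma, degree = 0.3, 2                                # fixed model complexity

# A very large held-out sample stands in for the true distribution of (x, y).
x_big = rng.uniform(0, 1, 200_000)
y_big = f_true(x_big) + rng.normal(0, sigma, 200_000)

for N in [10, 30, 100, 1000, 10000]:
    x = rng.uniform(0, 1, N)                          # training set of size N
    y = f_true(x) + rng.normal(0, sigma, N)
    w_hat = np.polyfit(x, y, degree)
    train_err = np.mean((y - np.polyval(w_hat, x)) ** 2)
    true_err = np.mean((y_big - np.polyval(w_hat, x_big)) ** 2)
    print(f"N = {N:6d}: training error {train_err:.3f}, true error {true_err:.3f}")
```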
# 4) Formally defining and deriving the 3 sources of error
## 1) Formally defining the 3 sources of error
So we mentioned that the training set is just a random sample of some N observations, in this case some N houses that were sold and recorded. But what if N other houses had been sold and recorded? How would our performance change? So for example, here in this picture we're showing one set of N observations that are used for training data, those are the blue circles. And we fit some quadratic function through this data, and here we show some other set of N observations, and we see that we get a different fit.
<img src="images/lec3_pic54.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:00*
<!--TEASER_END-->
And to assess our performance of each one of these fits we can think about looking at generalization error.
- So in the first case we might get one generalization error of this specific fit $\hat w(1)$.
- And in the second case we would get some different evaluation of generalization error. Let's call it generalization error of $\hat w(2)$.
<img src="images/lec3_pic55.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:30*
<!--TEASER_END-->
But one thing that we might be interested in is, how do we perform on average for a training data set of N observations?
Because imagine we're trying to develop a tool that's gonna be used by real estate agents to form these types of predictions. Well, I'd like to design my tool, package it up and send it out there, and then a real estate agent might come in and have some set of observations of house sales from their neighborhood that they're using to make their predictions. So that might be different than another real estate agent.
And what I'd like to know, is for a given amount of data, some training set of size N, how well should I expect the performance of this model to be, regardless of what specific training dataset I'm looking at? So in these cases what we like to do is average our performance over all possible fits that we might get. What I mean by that is all possible training data sets that might have appeared, and the resulting fits on those data sets.
<img src="images/lec3_pic56.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 1:50*
<!--TEASER_END-->
So formally, we're gonna define this thing called expected prediction error, which is the expected value of our generalization
error, over different training data sets. So very specifically, for a given training data set, we get parameters that are fit to that data set. So I'll call that $\hat w$ of training set. And then for that estimated model, I can evaluate my generalization error and what the expected prediction error is doing is it's taking a weighted average over all possible training sets that I might have seen. Where for each one I get a different set of estimated parameters and thus a different notion of the generalization error.
<img src="images/lec3_pic57.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 3:00*
<!--TEASER_END-->
And to start analyzing this quantity of prediction error, let's specifically look at some target input $x_t$, which might be a house with 2,640 square feet. And let's also take our loss function to be squared error. So in this case we're talking
specifically about a target point $x_t$. What we can do later, after we do the analysis specifically for $x_t$, is think about averaging this over all possible $x_t$, over all x, all square feet. But in some cases we might actually be interested in one region of our input space in particular. And when we talk about using squared error in particular, this is gonna allow our analysis to follow through really nicely, as we're gonna show not in this video, but in our next even more in-depth
video, which is also optional.
<img src="images/lec3_pic58.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 4:00*
<!--TEASER_END-->
But under these assumptions of looking specifically at $x_t$ and looking at squared error as our measure of loss, you can show that the average prediction error at $x_t$ is simply the sum of three terms, which we're gonna go through: sigma squared, bias squared, and variance.
So these terms are yet to be defined, and this is what we're gonna walk through in this video in a much more formal way than we did in the previous set of slides.
<img src="images/lec3_pic59.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 4:35*
<!--TEASER_END-->
So let's start by talking about this first term, sigma squared and what this is gonna represent is the noise we talked about in the earlier videos.
So in particular, remember that we're saying that there's some true relationship between square feet and house value. That that's just a relationship that exists out there in the world, and that's captured by $f_{w(true)}$, but of course that doesn't fully capture how we think about the value of a house. There are other factors at play. And so all those other factors out there in the world are captured by our noise term, which here we write as just an additive term plus epsilon.
So epsilon is our noise, and we said that this noise term has zero mean because, if not, we could just shove that other component into $f_{w(true)}$. But if we just make the assumption that epsilon has zero mean, then we can start talking about what is the
spread of noise you're likely to see at any point in the input space. And that spread is called the variance. So we denote it by sigma squared, and sigma squared is the variance of this noise epsilon.
And as we talked about before, this noise is just noise that's out there in the world; we have no control over it no matter how complicated and interesting a model we specify, or what algorithm we use for fitting that model. We can't do anything about the fact
that we're using x for our prediction; there's just inherently some noise in how our observations are generated in the world. So for this reason, this is called our irreducible error, because it's noise that we can't reduce through any choices that we have control over.
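In symbols, the data model described here is

$$y = f_{w(true)}(x) + \epsilon, \qquad E[\epsilon] = 0, \qquad Var(\epsilon) = \sigma^2$$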
<img src="images/lec3_pic60.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 5:50*
<!--TEASER_END-->
So now let's talk about this second term, bias squared.
And remember that when we talked about bias this was a notion of how well our model could on average fit the true relationship between x and y. But now let's go through this at a much more formal level. And in particular let's just remember that
there's some relationship between square feet and house value in our case, which is represented by this orange line. And then from this true world we get some data set that defines a training set, which are these blue circles. And using this training data we estimate our model parameters. Well, if we had gotten some other set of N points, we would have fit some other function.
<img src="images/lec3_pic61.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 7:00*
<!--TEASER_END-->
Now, when I look over all possible data sets of size N that I might have gotten, remember this blue shaded region here represents the distribution over x and y, so how likely it is to get different combinations of x and y. And let's say I draw N points from this joint distribution over x and y, and over all possible values I look at an estimated function. So for example here are the two estimated functions from the previous slide, those example data sets that I showed. But of course there's a whole continuum of estimated functions that I get for different training sets of size N. Then when I average these estimated functions, these specific fits, over all my possible training data sets, what I get is my average fit. So now let's talk about this a little bit more formally. We had already presented this in our previous video.
This $f_{\bar w}$ (f sub w bar). But now, let's define this. This is the expectation of a specific fit on a specific training data set or let me rephrase that, the fit I get on a specific training data set averaged over all possible training data sets of size N that I might get. So that is the formal definition of this $f_{\bar w}$ (f sub w bar), what we have been calling our average fit.
And what we're talking about when we're talking about bias is comparing this average fit to the true relationship. And here remember again, we're focusing specifically on some target $x_t$. And so the bias at $x_t$ is the difference between the true relationship between $x_t$ and y, so between a given square footage and the house value, whatever the true relationship is between that input and the observation, versus this average relationship estimated over all possible training data sets.
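Written out, the bias at the target point is

$$bias(x_t) = f_{w(true)}(x_t) - f_{\bar w}(x_t)$$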
<img src="images/lec3_pic62.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 9:00*
<!--TEASER_END-->
So that is the formal notion of bias of $x_t$, and let's just remember that when it comes in as our error term, we're looking at bias squared.
<img src="images/lec3_pic63.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 9:25*
<!--TEASER_END-->
**So that's the second term. Now let's turn to this third term, which is variance.**
And let's go through this definition where again, we're interested in this average fit $f_{\bar w}$ (f sub w bar), this green dashed line. But that really isn't the quantity of interest. It's gonna be used in our definition here. But the thing that we're really interested in, is over all possible fits we might see. How much do they deviate from this expected fit?
<img src="images/lec3_pic64.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 10:00*
<!--TEASER_END-->
So thinking about again, specifically at our target $x_t$, how much variation is there in the training dataset specific fits across all training datasets we might see?
<img src="images/lec3_pic65.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 10:15*
<!--TEASER_END-->
And that's this variance term and now again, let's define it very formally.
Well let me first state what variance is in general. So variance of some random variable is simply looking at the expected value of that random variable minus its mean squared. So in this context, when we're looking at the variability of these functions at xt, we're taking the expectation and our random quantity is our estimated function for a specific training data set at $x_t$.
And then what's the mean of that random function? The mean is this average fit, this $f_{\bar w}$ (f sub w bar). So we're looking at the difference between the fit on a specific training dataset and what I expect to get, averaged over all possible training datasets. I look at that quantity squared, and what is my expectation taken over?
Let me just mention that this quantity when I take this squared, represents a notion of how much deviation a specific fit has from the expected fit at $x_t$. And then when I think about what the expectation is taking over, it's taking over all possible
training data sets of size N. So that's my variance term.
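Written out, the variance at the target point is

$$var(x_t) = E_{train}\big[\big(f_{\hat w(train)}(x_t) - f_{\bar w}(x_t)\big)^2\big]$$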
And when we think intuitively about why it makes sense that we have the sum of these three terms in this specific form: well, what we're saying is variance is telling us how much my specific function that I'm using for prediction can vary. I'm just gonna use one of these functions for prediction. I get a training dataset that gives me an $f_{\hat w}$ (f sub w hat), and I'm using that for prediction. Well, how much can that deviate from my expected fit over all datasets I might have seen?
So again, going back to our analogy, I'm a real estate agent, I grab my data set, I fit a specific function to that training data. And I wanna know well, how wild of a fit could this be relative to what I might have seen on average over all possible datasets that all these other realtors are using out there?
And so of course, if the function from one realtor to another realtor looking at different data sets can vary dramatically, that can be a source of error in our predictions. But another source of error, which the bias is capturing, is that over all these possible datasets, all these possible realtors, if this average function just can never capture anything close to the true relationship between square feet and house value, then we can't hope to get good predictions either, and that's what our bias is capturing. And why are we looking at bias squared? Well, that's putting it on an equal footing with these variance terms, because remember bias was just the difference between the true value and our expected value, but these variance terms are looking at
these types of quantities squared. So that's intuitively why we get bias squared. And then finally, what's our third source of error?
Well let's say I have no variance in my estimator, or always very low variance, and the model happens to be a very good
fit, so neither of these things are sources of error; I'm doing basically magically perfectly on my modeling side. Still, inherently there's noise in the data. There are things that just trying to form predictions from square feet alone can't capture. And so that's where irreducible error, or this sigma squared, is coming through. And so intuitively this is why our
prediction errors are a sum of these three different terms that now we've defined much more formally.
<img src="images/lec3_pic66.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/PB7vp/formally-defining-the-3-sources-of-error) 12:00*
<!--TEASER_END-->
## 2) Formally deriving the 3 sources of error
Now let's walk through why specifically these are the three sources of error, and why they appear as sigma squared plus bias squared plus variance.
Let's start by recalling our definition of expected prediction error, which was the expectation over training data sets of our generalization error. And here I'm using just a shorthand notation, train, instead of training set. (train = training set)
So let's plug in the formal definition of our generalization error. And remember that our generalization error was our expectation over all possible input and output pairs, X, Y pairs of our loss. And so that's what is written here on the second line. And then let's remember that we talked about specifying things specifically at a target $x_t$, and under an assumption of
using a loss function of squared error. And so again we're gonna use this to form all of our derivations. And so when we make these two assumptions, then this expected prediction error at $x_t$ simplifies to the following, where there's no longer an expectation over x because we're fixing our point in the input space to be $x_t$. And our expectation over y becomes an expectation over $y_t$ because we're only interested in the observations that appear for an input at $x_t$. So the other thing that we've done in this equation is we've plugged in our specific definition of our loss function as our squared error loss. So, for the remainder of this video, we're gonna start with this equation and we're gonna derive why we get this specific form, sigma squared plus bias squared plus variance.
<img src="images/lec3_pic66.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 2:00*
<!--TEASER_END-->
Expected prediction error at $x_t$
$$= \large E_{train,y_t}[(y_t - f_{\hat w(train)}(x_t))^2]$$
So this is the definition of expected prediction error at $x_t$ that we had on the previous slide, under our assumption of squared error loss. What we can do is rewrite this equation as follows, where we've simply added and subtracted the true function, the true relationship between x and y, specifically at $x_t$. And because we've just added and subtracted the same quantity, nothing in this equation has changed as a result.
$$= \large E_{train,y_t}[((y_t - f_{w(true)}(x_t)) + (f_{w(true)}(x_t) - f_{\hat w(train)}(x_t)))^2]$$
Let's do a little aside here, because it is useful. So if we take the expectation of some quantity:
$$ \large E[(a + b)^2] \\
= E[a^2 + 2ab + b^2] \\
= E[a^2] + 2E[ab] + E[b^2]$$
I'm going to define some shorthand for writing purposes:
- $y_t$: y
- $f_{w(true)}$: f
- $f_{\hat w(train)}$: $\hat f$
Now that we've set the stage for this derivation, let's rewrite this term here. So we get the expectation over our training data set and our observation (remember I'm writing $y_t$ just as y), and I'm going to get the first term squared, so that's (y - f) squared; that's my a squared term, this first term here. And then I'm gonna get two times the expectation of a times b, where again the expectation is over the training data set and the observation y, and a times b is (y - f) times (f - f hat). And then the final term is the expectation over my training set and the observation y of b squared, which is (f - f hat) squared.
$$= \large E_{train,y}[(y-f)^2] +2E_{train,y}[(y - f)(f- \hat f)] + E_{train,y}[(f- \hat f)^2]$$
Now let's simplify this a bit.
Does anything in this first term depend on my training set? Well, y is not a function of the training data, and f is not a function of the training data; that's the true function. So the expectation over the training set is not relevant for this first term. And when I think about the expectation over y, well, what is this? This is the difference between my observation and the true function, and that, specifically, is epsilon. So what this term here is, is the expectation of epsilon squared, and since epsilon has zero mean, the expectation of epsilon squared is just my variance from the world. That's sigma squared. Okay, so
this first term results in sigma squared.
$$ E[(y - f)^2] = E[\epsilon^2] = \sigma^2 $$
Now let's look at this second term, you know what, I'm going to write this a little bit differently to make it very clear here. So I'll just say that this first term here is sigma squared by definition. Okay, now let's look at this second term. And again what is Y minus F? Well Y minus F is this epsilon noise term and our noise is a completely independent variable from F or F hat.
- If I take the expectation of A and B, where A and B are independent random variables, then the expectation of A times B is equal to the expectation of A times the expectation of B. So, this is another little aside.
$$E[ab] = E[a]E[b] \text{, where a, b are independent variables.}$$
And so what I'll get here is that this term is the expectation of epsilon times the expectation of (f - f hat). And what's the expectation of epsilon, my noise? It's zero. Remember, we said again and again that we're assuming epsilon is zero-mean noise; any nonzero mean could be incorporated into f. So this term is zero, the result of this whole thing is going to be zero, and we can ignore that second term.
$$E[(y - f)(f- \hat f)] \\
= E[\epsilon] E[f - \hat f] \\
= 0 \cdot E[f - \hat f] \\
= 0$$
Let's look at this last term. This term, for this slide, I'm simply gonna call the mean squared error. I'm gonna use this little equals sign with a triangle on top to indicate that it's something I'm defining here. I'm defining this to be equal to something called the mean squared error, and let me write that out in case you want to look it up later: the mean squared error of f hat.
$$E[(f- \hat f)^2] = MSE(\hat f)$$
Now that I've gone through and done that, I can say that the result of all this derivation is that I get the quantity sigma squared plus the mean squared error of f hat.
$$\large E_{train,y}[(y-f)^2] +2E_{train,y}[(y - f)(f- \hat f)] + E_{train,y}[(f- \hat f)^2] \\
= \sigma^2 + MSE(\hat f)$$
But so far we've said a million times that my expected prediction error at $x_t$ is sigma squared plus bias squared plus variance. On the next slide what we're gonna do is we're gonna show how our mean squared error is exactly equal to bias squared plus variance.
<img src="images/lec3_pic74.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 10:00*
<!--TEASER_END-->
What I've done is I've started this slide by writing the mean squared error of what, on the previous slide, we were calling
$\hat f$; that was our shorthand notation.
$$MSE[f_{\hat w(train)}(x_t)] = \\
E_{train}[(f_{w(true)}(x_t) - f_{\hat w(train)}(x_t))^2]$$
- $f_{\hat w(train)}(x_t) = \hat f$
And so the mean squared error of $\hat f$, according to the definition on the previous slide, is looking at the expectation of (f - f hat) squared. And I guess here I can mention: when I take this expectation over the training data and my observation y, does the observation y appear anywhere in f minus f hat? No, so I can drop the y there. If I look at this, I'm repeating it here on this next slide, where I have the expectation over my training data of my true function, which on the last slide I had
been denoting simply as f, and the estimated function, which, to be clear, inside this square I had been denoting as f hat. And both of these quantities are evaluated specifically at $x_t$.
$$= E_{train}[((f_{w(true)}(x_t) - f_{\bar w}(x_t)) + (f_{\bar w}(x_t) - f_{\hat w(train)}(x_t)))^2]$$
Again let's go through expanding this, where in this case, when we rewrite this quantity in a way that's gonna be useful for this derivation, we're gonna add and subtract $f_{\bar w}$ (f sub w bar). And $f_{\bar w}$, remember, was the green dashed line in all those bias-variance plots. What $f_{\bar w}$ is, is the average over all possible training data sets, where for each training data set I get a specific fitted function, and I average all those fitted functions over those different training data sets. That's what results in f sub w bar. It's my average fit for my specific model, averaging
over my training data sets. And so for simplicity here, I'm gonna refer to $f_{\bar w}$ as $\bar f$.
- $f_{\bar w} = \bar f$
Using that same trick of taking the expectation of (a plus b) squared, expanding the square, and then passing the expectation through, I'm going to do the same thing here
$$= E_{train}[(f - \bar f)^2] + 2E_{train}[(f - \bar f)(\bar f - \hat f)] + E_{train}[(\bar f - \hat f)^2]$$
Now let's go through and talk about what each of these quantities is.
- And the first thing is, let's just remember what the definition of $\bar f$ was, formally. It was my expectation over training data sets of $\hat f$, my fitted function on a specific training data set. I've already taken the expectation over the training set here. f is the true relationship; f has nothing to do with the training data. This is a number. And $\bar f$ is the mean of a random variable, and it no longer has to do with the training data set either; I've averaged over training data sets. So there's really no expectation over training data sets here; nothing is random in terms of the training data set for this first quantity. So $\bar f = E_{train}[\hat f]$
- This first quantity is really simply $(f - \bar f)^2$, and what is that? That's the difference between the true function and my average, my expected fit. Specifically at $x_t$, but squared. That is bias squared. That's by definition. So $$E_{train}[(f - \bar f)^2] = (f - \bar f)^2 = bias^2(\hat f)$$
Now let's look at this second term, where the factor $(f - \bar f)$ is not a function of the training data. So this is just like a scalar; it can come out of the expectation, so for this second term I can rewrite this as
$$2E_{train}[(f - \bar f)(\bar f - \hat f)] \\
= 2(f- \bar f) E_{train}[\bar f - \hat f]$$
- Okay. And now let's rewrite this term, and just pass the expectation through. And the first thing is, again, $\bar f$ is not a function of the training data, so the result of that is just f bar, and then I'm gonna get minus the expectation over my training data of f hat.
$$E_{train}[\bar f - \hat f] = \bar f - E_{train}[\hat f]$$
- So, what is this $E_{train}[\hat f]$? This is the definition of f bar. This is taking my specific fit on a specific training data set at $x_t$, and taking the expectation over all training data sets. That's exactly the definition of what f bar is, that average fit.
$$E_{train}[\hat f] = \bar f$$
- So, this term here is equal to 0
$$E_{train}[\bar f - \hat f] \\
= \bar f - E_{train}[\hat f] \\
= \bar f - \bar f \\
= 0$$
That just leaves one more quantity to analyze, and that's the last term here, where what I have is an expectation of a function minus its mean, squared. So let me just note that I can
equivalently write this as the expectation of (f hat minus f bar) squared. I hope it's clear that the sign flip there doesn't matter, because it gets squared; they're exactly equivalent. And so what is this?
- $\hat f$: this is a random function at $x_t$ which is equal to just a random variable.
- $\bar f$: and this is its mean.
And so the definition of taking the expectation of some random variable minus its mean squared, that's the definition of variance. So, this term is the variance of f hat.
$$E_{train}[(\bar f - \hat f)^2] = E[(\hat f - \bar f)^2] = var(\hat f)$$
<img src="images/lec3_pic75.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 19:00*
<!--TEASER_END-->
That's exactly what we were hoping to show, because now we can talk about putting it all together, where what we see is that our
expected prediction error at $x_t$ was derived to be equal to sigma squared plus mean squared error. And then we derived the fact that mean squared error is equal to bias squared plus variance. So we get the end result that our expected prediction error at $x_t$ is sigma squared plus bias squared plus variance, and this represents our three sources of error. And we've now completed our formal derivation of this.
<img src="images/lec3_pic76.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/QiT0N/formally-deriving-why-3-sources-of-error) 20:00*
<!--TEASER_END-->
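Although the lectures derive this decomposition analytically, a small simulation can make it concrete. The sketch below is my own illustration rather than part of the course: it repeatedly draws training sets from an assumed true function, fits a fixed-degree polynomial to each, and estimates bias squared, variance, and the expected prediction error at one target point. Every name and constant in it (`f_true`, `sigma`, the degree, and so on) is an assumption chosen only for the example.
```
# A minimal simulation sketch: estimate the three sources of error at one target point x_t
import numpy as np

np.random.seed(0)

def f_true(x):
    return np.sin(2 * np.pi * x)        # stand-in for the unknown true relationship

sigma = 0.3                             # noise std; sigma**2 is the irreducible error
N = 30                                  # training set size
degree = 2                              # fixed model complexity
x_t = 0.6                               # target input
n_datasets = 2000                       # number of simulated training sets

preds_at_xt = np.empty(n_datasets)
for d in range(n_datasets):
    x = np.random.uniform(0, 1, N)
    y = f_true(x) + np.random.normal(0, sigma, N)   # noisy observations
    w_hat = np.polyfit(x, y, degree)                # fit on this training set
    preds_at_xt[d] = np.polyval(w_hat, x_t)         # f_hat(x_t) for this training set

f_bar = preds_at_xt.mean()                          # average fit at x_t
bias_sq = (f_true(x_t) - f_bar) ** 2                # bias^2 at x_t
variance = preds_at_xt.var()                        # variance of the fits at x_t
print(bias_sq, variance, sigma**2 + bias_sq + variance)   # last value approximates EPE(x_t)
```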
# 5) Putting the pieces together
## 1) Training/validation/test split for model selection, fitting, and assessment
Let's wrap up by talking about two really important tasks when you're doing regression. And through this discussion, it's gonna motivate another important concept: thinking about validation sets.
So, the two important tasks in regression are: first, we need to choose a specific model complexity. So for example, when we're talking about polynomial regression, what's the degree of that polynomial? And then, for our selected model, we assess its performance. And actually these two steps aren't specific just to regression; we're gonna see this in all different aspects of machine learning, where we have to specify our model and then we need to assess the performance of that model. So what we're gonna talk about in this portion of this module generalizes well beyond regression. And for this first task, where we're
talking about choosing the specific model, we're gonna talk about it in terms of some set of tuning parameters, lambda, which control the model complexity. Again, for example, lambda might specify the degree of the polynomial in polynomial regression.
<img src="images/lec3_pic77.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 1:00*
<!--TEASER_END-->
So, let's first talk about how we can think about choosing lambda. And then for a given model specified by lambda, a given model complexity, let's think about how we're gonna assess the performance of that model.
Well, one really naive approach is to do what we've described before, where you take your data set and split it into a training set and a test set. And then, for our model selection portion, where we're choosing the model complexity lambda:
for every possible choice of lambda, we're gonna estimate the model parameters associated with that lambda on the training set. And then we're gonna test the performance of that fitted model on the test set. And we're gonna tabulate that for every lambda that we're considering. And we're gonna choose our tuning parameters as the ones that minimize this test error, so the ones that perform best on the test data. And we're gonna call those parameters lambda star.
So, now I have my model. I have my specific degree of polynomial that I'm gonna use. And I wanna go and assess the performance of this specific model. And the way I'm gonna do this is I'm gonna take my test data again. And I'm gonna say, well, okay, I know that test error is an approximation of generalization error. So, I'm just gonna compute the test error for this lambda star fitted model. And I'm gonna use that as my approximation of the performance of this model. Well, what's the issue with this? Is this gonna perform well? No, it's really overly optimistic.
<img src="images/lec3_pic78.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 2:50*
<!--TEASER_END-->
So, this issue is just like what we saw when we weren't dealing with this notion of choosing model complexity. We just assumed that we had a specific model, like a specific degree polynomial. But we wanted to assess the performance of the model. And the naive approach we took there was saying, well, we fit the model to the training data, and then we're gonna use training error to
assess the performance of the model. And we said, that was overly optimistic because we were double dipping. We already used the data to fit our model. And then, so that error was not a good measure of how we're gonna perform on new data.
Well, it's exactly the same notion here and let's walk through why. Most specifically, when we're thinking about choosing our model complexity, we were using our test data to compare between different lambda values. And we chose the lambda value that
minimized the error on that test data that performed the best there. So, you could think of this as having fit lambda, this model complexity tuning parameter, on the test data. And now, we're thinking about using test error as a notion of approximating
how well we'll do on new data. But the issue is, unless our test data represents everything we might see out there in the world,
that's gonna be way too optimistic. Because lambda was chosen, the model was chosen, to do well on the test data and so that won't generalize well to new observations.
<img src="images/lec3_pic79.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 4:00*
<!--TEASER_END-->
So, what's our solution? Well, we can just create two test data sets. They won't both be called test sets, we're gonna call one of them a validation set. So, we're gonna take our entire data set, just to be clear. And now, we're gonna split it into three data sets.
One will be our training data set, one will be what we call our validation set, and the other will be our test set. And then what we're gonna do is fit our model parameters always on our training data, for every given model complexity that we're considering. But then we're gonna select our model complexity as the one that performs best on the validation set,
that has the lowest validation error. And then we're gonna assess the performance of that selected model on the test set. And we're gonna say that that test error is now an approximation of our generalization error. Because that test set was never used in
either fitting our parameters, w hat, or selecting our model complexity lambda, that other tuning parameter. So, that data was completely held out, never touched, and it now forms a fair estimate of our generalization error.
<img src="images/lec3_pic80.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 5:00*
<!--TEASER_END-->
So in summary, we're gonna fit our model parameters for any given complexity on our training set. Then, for every fitted model and for every model complexity, we're gonna assess the performance and tabulate this on our validation set, and we're gonna use that to select the optimal set of tuning parameters lambda star. And then for that resulting model, that w hat sub lambda star, we're gonna assess a notion of the generalization error using our test set.
<img src="images/lec3_pic81.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 6:00*
<!--TEASER_END-->
And so a question is, how can we think about doing the split between our training set, validation set, and test set? And there's no hard and fast rule here; there's no one answer that's the right answer. But typical splits that you see out there
are something like an 80-10-10 split: 80% of your data for training, 10% for validation, 10% for test. Another common split is 50%, 25%, 25%. But again, this is assuming that you have enough data to do this type of split and still get reasonable estimates of your model parameters and reasonable notions of how different model complexities compare, because you have a large enough validation set, and you still have a large enough test set in order to assess the generalization error of the resulting model. And if this isn't the case, we're gonna talk about other methods that allow us to do these same types of things, but without this type of hard division between training, validation, and test.
<img src="images/lec3_pic81.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/HNJ0c/training-validation-test-split-for-model-selection-fitting-and-assessment) 7:00*
<!--TEASER_END-->
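For concreteness, here is a minimal sketch of this workflow in scikit-learn; the synthetic data, the candidate degrees, and the 80-10-10 proportions are placeholder choices of mine, not something prescribed by the course.
```
# Minimal sketch: 80/10/10 train/validation/test split with model selection on validation
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(0, 1, (200, 1))                      # placeholder inputs
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.3, 200)

# 80% train, 10% validation, 10% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_degree, best_val_error = None, np.inf
for degree in range(1, 7):                           # candidate model complexities (lambda)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)                      # fit w_hat on the training set
    val_error = mean_squared_error(y_val, model.predict(X_val))
    if val_error < best_val_error:
        best_degree, best_val_error = degree, val_error

# assess the selected complexity (lambda star) once on the untouched test set
final_model = make_pipeline(PolynomialFeatures(best_degree), LinearRegression())
final_model.fit(X_train, y_train)
test_error = mean_squared_error(y_test, final_model.predict(X_test))
print(best_degree, test_error)
```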
## 2) A brief recap
<img src="images/lec3_pic83.png">
*Screenshot taken from [Coursera](https://www.coursera.org/learn/ml-regression/lecture/FT2HG/a-brief-recap) 1:00*
<!--TEASER_END-->
# Stochastic Gradient Langevin Dynamics in MXNet
```
%matplotlib inline
```
In this notebook, we will show how to replicate the toy example in the paper <a name="ref-1"/>[(Welling and Teh, 2011)](#cite-welling2011bayesian). Here we have observed 100 instances from a mixture of Gaussians with tied means:
$$
\begin{aligned}
\theta_1 &\sim N(0, \sigma_1^2)\\
\theta_2 &\sim N(0, \sigma_2^2)\\
x_i &\sim \frac{1}{2}N(\theta_1, \sigma_x^2) + \frac{1}{2}N(\theta_1 + \theta_2, \sigma_x^2)
\end{aligned}
$$
We are asked to draw samples from the posterior distribution $p(\theta_1, \theta_2 \mid X)$. In the following, we will use stochastic gradient langevin dynamics (SGLD) to do the sampling.
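As a reminder of the method (this is the update rule from the paper, not something specific to MXNet), SGLD takes a stochastic-gradient step on the log posterior and injects Gaussian noise whose variance matches the step size:

$$
\Delta\theta_t = \frac{\epsilon_t}{2}\left(\nabla \log p(\theta_t) + \frac{N}{n}\sum_{i=1}^{n} \nabla \log p(x_{t_i} \mid \theta_t)\right) + \eta_t, \qquad \eta_t \sim N(0, \epsilon_t)
$$

where $N$ is the dataset size, $n$ is the minibatch size (1 in the code below), and $\epsilon_t$ is a step size that decreases over iterations. The `rescale_grad` argument in the code plays the role of $N/n$.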
```
import mxnet as mx
import mxnet.ndarray as nd
import numpy
import logging
import time
import matplotlib.pyplot as plt
def load_synthetic(theta1, theta2, sigmax, num=20):
flag = numpy.random.randint(0, 2, (num,))
X = flag * numpy.random.normal(theta1, sigmax, (num, )) \
+ (1.0 - flag) * numpy.random.normal(theta1 + theta2, sigmax, (num, ))
return X.astype('float32')
class SGLDScheduler(mx.lr_scheduler.LRScheduler):
def __init__(self, begin_rate, end_rate, total_iter_num, factor):
super(SGLDScheduler, self).__init__()
if factor >= 1.0:
raise ValueError("Factor must be less than 1 to make lr reduce")
self.begin_rate = begin_rate
self.end_rate = end_rate
self.total_iter_num = total_iter_num
self.factor = factor
self.b = (total_iter_num - 1.0) / ((begin_rate / end_rate) ** (1.0 / factor) - 1.0)
self.a = begin_rate / (self.b ** (-factor))
self.count = 0
def __call__(self, num_update):
self.base_lr = self.a * ((self.b + num_update) ** (-self.factor))
self.count += 1
return self.base_lr
def synthetic_grad(X, theta, sigma1, sigma2, sigmax, rescale_grad=1.0, grad=None):
if grad is None:
grad = nd.empty(theta.shape, theta.context)
theta1 = theta.asnumpy()[0]
theta2 = theta.asnumpy()[1]
v1 = sigma1 **2
v2 = sigma2 **2
vx = sigmax **2
denominator = numpy.exp(-(X - theta1)**2/(2*vx)) + numpy.exp(-(X - theta1 - theta2)**2/(2*vx))
grad_npy = numpy.zeros(theta.shape)
grad_npy[0] = -rescale_grad*((numpy.exp(-(X - theta1)**2/(2*vx))*(X - theta1)/vx
+ numpy.exp(-(X - theta1 - theta2)**2/(2*vx))*(X - theta1-theta2)/vx)/denominator).sum()\
+ theta1/v1
grad_npy[1] = -rescale_grad*((numpy.exp(-(X - theta1 - theta2)**2/(2*vx))*(X - theta1-theta2)/vx)/denominator).sum()\
+ theta2/v2
grad[:] = grad_npy
return grad
```
We first write the generation process. In the paper, the data instances are generated with the following parameters: $\theta_1 = 0, \theta_2 = 1, \sigma_1^2=10, \sigma_2^2=1, \sigma_x^2=2$.
Also, we need to write a new learning rate schedule as described in the paper $\epsilon_t = a(b+t)^{-r}$
and calculate the gradient. After these preparations, we can go on with the sampling process.
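For reference, the `SGLDScheduler` above chooses $a$ and $b$ so that the step size equals `begin_rate` at the first update and `end_rate` at the last one. Writing $T$ for `total_iter_num` and $\gamma$ for `factor` (the exponent $r$ above), the two conditions $\epsilon_0 = a\,b^{-\gamma}$ and $\epsilon_{T-1} = a\,(b+T-1)^{-\gamma}$ give

$$b = \frac{T-1}{(\epsilon_0/\epsilon_{T-1})^{1/\gamma} - 1}, \qquad a = \epsilon_0\, b^{\gamma}$$

which is exactly what the constructor computes.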
```
numpy.random.seed(100)
mx.random.seed(100)
theta1 = 0
theta2 = 1
sigma1 = numpy.sqrt(10)
sigma2 = 1
sigmax = numpy.sqrt(2)
X = load_synthetic(theta1=theta1, theta2=theta2, sigmax=sigmax, num=100)
minibatch_size = 1
total_iter_num = 1000000
lr_scheduler = SGLDScheduler(begin_rate=0.01, end_rate=0.0001, total_iter_num=total_iter_num,
factor=0.55)
optimizer = mx.optimizer.create('sgld',
learning_rate=None,
rescale_grad=1.0,
lr_scheduler=lr_scheduler,
wd=0)
updater = mx.optimizer.get_updater(optimizer)
theta = mx.random.normal(0, 1, (2,), mx.cpu())
grad = nd.empty((2,), mx.cpu())
samples = numpy.zeros((2, total_iter_num))
start = time.time()
for i in range(total_iter_num):
if (i+1)%100000 == 0:
end = time.time()
print "Iter:%d, Time spent: %f" %(i + 1, end-start)
start = time.time()
ind = numpy.random.randint(0, X.shape[0])
synthetic_grad(X[ind], theta, sigma1, sigma2, sigmax, rescale_grad=
X.shape[0] / float(minibatch_size), grad=grad)
updater('theta', grad, theta)
samples[:, i] = theta.asnumpy()
```
We have collected 1000000 samples in the **samples** variable. Now we can draw the density plot. For more about SGLD, the original paper and <a name="ref-2"/>[(Neal, 2011)](#cite-neal2011mcmc) are good references.
```
plt.hist2d(samples[0, :], samples[1, :], (200, 200), cmap=plt.cm.jet)
plt.colorbar()
plt.show()
```
<!--bibtex
@inproceedings{welling2011bayesian,
title={Bayesian learning via stochastic gradient Langevin dynamics},
author={Welling, Max and Teh, Yee W},
booktitle={Proceedings of the 28th International Conference on Machine Learning (ICML-11)},
pages={681--688},
url="http://www.icml-2011.org/papers/398_icmlpaper.pdf",
year={2011}
}
@article{neal2011mcmc,
title={MCMC using Hamiltonian dynamics},
author={Neal, Radford M and others},
journal={Handbook of Markov Chain Monte Carlo},
volume={2},
pages={113--162},
url="www.mcmchandbook.net/HandbookChapter5.pdf",
year={2011}
}
-->
# References
<a name="cite-welling2011bayesian"/><sup>[^](#ref-1) </sup>Welling, Max and Teh, Yee W. 2011. _Bayesian learning via stochastic gradient Langevin dynamics_. [URL](http://www.icml-2011.org/papers/398_icmlpaper.pdf)
<a name="cite-neal2011mcmc"/><sup>[^](#ref-2) </sup>Neal, Radford M and others. 2011. _MCMC using Hamiltonian dynamics_. [URL](www.mcmchandbook.net/HandbookChapter5.pdf)
<a href="https://colab.research.google.com/github/sahooamarjeet/ML_Case_Study/blob/master/Customer_Analytics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
```
Mount Google Drive in Colab
```
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
import os;os.listdir("/content/gdrive/My Drive/Colab Notebooks")
df = pd.read_csv("/content/gdrive/My Drive/Colab Notebooks/WA_Fn-UseC_-Marketing-Customer-Value-Analysis.csv")
df.head()
df.shape
# column names
df.columns
```
**2: Analytics on Engaged Customers**
We are going to analyze the data to understand how different customers behave and react to different
marketing strategies.
**2.1: Overall Engagement Rate**
* The Response field contains information about whether a customer responded to the marketing efforts
```
# Get the total number of customers who responded to the marketing efforts
df.groupby('Response').count()['Customer']
# visualize the customer responded using bar plot
ax = df.groupby('Response').count()['Customer'].plot(kind = 'bar', color = 'orchid', grid = True, figsize = (10,7),
title = "Marketing Engagement")
```
**2.2: Engagement Rates by Offer Type**
The Renew Offer Type column in this DataFrame contains the type of the renewal offer presented to the customers. We are going to look into what types of offers worked best for the engaged customers.
```
by_offer_type_df = df.loc[df['Response'] == 'Yes',
].groupby(['Renew Offer Type']).count()['Customer']/df.groupby('Renew Offer Type').count()['Customer']
by_offer_type_df
ax = (by_offer_type_df*100).plot(kind = 'bar',figsize = (7,7), color = 'blue', grid = True, legend = True)
ax.set_ylabel("Engagment Rate %")
plt.show()
```
**2.3: Offer Type & Vehicle Class**
We are going to understand how customers with different attributes respond differently to different marketing messages. We start by looking at the engagement rates by each offer type and vehicle class.
```
by_offer_type_df = df.loc[df['Response'] == 'Yes' #engaged customer
].groupby([
'Renew Offer Type', 'Vehicle Class' # group by the two variables
]).count()['Customer']/df.groupby('Renew Offer Type').count()['Customer']
# Make the previous output more readable using unstack function
# to pivot the data and extract and transform the inner-level groups to columns
by_offer_type_df = by_offer_type_df.unstack().fillna(0)
by_offer_type_df
ax = (by_offer_type_df*100).plot(kind = 'bar', figsize = (10,7), grid = True)
ax.set_ylabel('Engagement Rate %')
plt.show()
```
**2.4: Engagement Rates by Sales Channel**
We are going to analyze how engagement rates differ by different sales channels.
```
by_sales_channel_df = df.loc[df['Response'] == 'Yes'].groupby([
'Sales Channel'
]).count()['Customer']/df.groupby('Sales Channel').count()['Customer']
by_sales_channel_df
ax = (by_sales_channel_df*100).plot(
kind = 'bar',
figsize = (7,7),
color = 'palegreen',
grid = True)
ax.set_ylabel('Engagement Rate (%)')
plt.show()
```
**2.5: Sales Channel & Vehicle Size**
We are going to see whether customers with various vehicle sizes respond differently to different
sales channels.
```
by_sales_channel_df = df.loc[df['Response'] == 'Yes'].groupby([
'Sales Channel','Vehicle Size'
]).count()['Customer']/df.groupby('Sales Channel').count()['Customer']
by_sales_channel_df
#unstack
by_sales_channel_df = by_sales_channel_df.unstack().fillna(0)
by_sales_channel_df
ax = (by_sales_channel_df*100).plot(kind = 'bar', figsize = (10,7), grid = True)
ax.set_ylabel('Engagement Rate (%)')
plt.show()
```
## Introduction
I was thinking for a while about my master thesis topic, and I wanted it to be related to data mining and artificial intelligence because I want to learn more about this field. I want to work with machine learning and for that, I have to study it more. Writing a thesis is a great way to get more experience with it. <br>
I thought I would use this dataset as a part of my thesis topic.
### Machine learning in medicine
This field of study plays a more and more important role in our lives. AI helps not only in the IT sector but also in medicine. It supports doctors and other medical staff, helps with validating data about patients, and even helps with diagnosing diseases. <br><br>
**In this kernel we will try to:**
* Analyse data of patients with heart problems.
* Find what plays a key role in causing heart disease
* Process data
* And make a prediction model
### Dataset explenation
* age: The person's age in years
* sex: The person's sex (1 = male, 0 = female)
* cp: The chest pain experienced (Value 1: typical angina, Value 2: atypical angina, Value 3: non-anginal pain, Value 4: asymptomatic)
* trestbps: The person's resting blood pressure (mm Hg on admission to the hospital)
* chol: The person's cholesterol measurement in mg/dl
* fbs: The person's fasting blood sugar (> 120 mg/dl, 1 = true; 0 = false)
* restecg: Resting electrocardiographic measurement (0 = normal, 1 = having ST-T wave abnormality, 2 = showing probable or definite left ventricular hypertrophy by Estes' criteria)
* thalach: The person's maximum heart rate achieved
* exang: Exercise induced angina (1 = yes; 0 = no)
* oldpeak: ST depression induced by exercise relative to rest ('ST' relates to positions on the ECG plot. See more here)
* slope: the slope of the peak exercise ST segment (Value 1: upsloping, Value 2: flat, Value 3: downsloping)
* ca: The number of major vessels (0-3)
* thal: A blood disorder called thalassemia (3 = normal; 6 = fixed defect; 7 = reversable defect)
* target: Heart disease (0 = no, 1 = yes)
```
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_graphviz
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from pdpbox import pdp, info_plots
import shap
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import math
dataset = pd.read_csv('Data/heart.csv')
dataset.head()
```
### That's how our dataset looks like
# Analysing data
We will start with small changes in our dataset so we can better understand what is going on in the plots.
```
dt = dataset.copy() # make copy of dataset
# map numeric codes to readable labels (using .loc avoids chained-assignment issues)
dt.loc[dt['sex'] == 0, 'sex'] = 'female'
dt.loc[dt['sex'] == 1, 'sex'] = 'male'
dt.loc[dt['cp'] == 0, 'cp'] = 'typical angina'
dt.loc[dt['cp'] == 1, 'cp'] = 'atypical angina'
dt.loc[dt['cp'] == 2, 'cp'] = 'non-anginal pain'
dt.loc[dt['cp'] == 3, 'cp'] = 'asymptomatic'
dt.loc[dt['slope'] == 0, 'slope'] = 'upsloping'
dt.loc[dt['slope'] == 1, 'slope'] = 'flat'
dt.loc[dt['slope'] == 2, 'slope'] = 'downsloping'
dt.loc[dt['target'] == 0, 'target'] = 'healthy'
dt.loc[dt['target'] == 1, 'target'] = 'sick'
dt.head()
countplot = sns.countplot(x='target', data=dt)
```
We can see our data is more or less balanced. Now let's check how many rows we have in our dataset.
```
dt['age'].size
```
### 303 rows
It might be enough for studying machine learning and data visualization - which means it suits our needs. However, it's not enough to fully analyze heart disease and make a prediction model with at least 95% accuracy.
We can start by checking whether, in our dataset, gender has some impact on the disease.
```
sns.countplot(x='target', hue='sex', data=dt)
```
Looks a bit odd. More males are in both groups, which can mean there are many more males in our dataset than females. Let's check it out.
```
pie, ax = plt.subplots(figsize=[10,6])
data = dt.groupby("sex").size() # data for Pie chart
labels = ['Female', 'Male']
plt.pie(x=data, autopct="%.1f%%", explode=[0.025]*2,labels=labels, pctdistance=0.5)
plt.title("Gender", fontsize=14);
```
Yes, as I thought, the majority of people in this dataset are male. From what we can read on the internet, mostly men suffer from heart disease, which may explain why we have more men in the dataset. Next we will take a look at age.
```
dt["age"] = dt["age"].astype(float)
dt["age"].plot.hist()
dt["age"].mean() # the age mean
```
Clearly, most patients in the dataset are people older than 50 years. The mean is 54 years, which isn't anything surprising; mostly older people have problems with the heart. What is more interesting is the number of people aged above 65. However, it's just a small dataset, so we cannot be sure why there are fewer people who are very old (65 years and older).
## Chest Pain Type Analysis
```
sns.countplot(dt['cp'])
plt.xlabel('Chest Type')
plt.ylabel('Count')
plt.title('Chest Type vs Count State')
plt.show()
sns.countplot(x="cp", hue="target", data=dt)
```
In this plot, I divide the data into 4 groups depending on the type of chest pain and compare it to the target (is the patient healthy or not).
<br>
We can see that more than 100 people with **typical angina** pain are healthy. And on the other side, the majority of people with **non-anginal pain** have heart disease.<br><br><br>
Next, we will need to create a model. This time we will use **Random Forest** with depth 3. We don't have many cases, so we cannot use a higher depth. I'm still not sure whether there will be any **overfitting**.
```
X = dataset.drop("target", axis=1) # X = all data apart of survived column
y = dataset["target"] # y = only column survived
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model_RFC = RandomForestClassifier(max_depth=3)
model_RFC.fit(X_train, y_train)
```
<br><br>
Now I'm going to use **Partial Dependence Plot**. It's something I just learned so let's try it out.
```
base_features = dataset.columns.values.tolist()
base_features.remove('target')
nr_of_vessles = dataset.columns.values.tolist()
nr_of_vessles.remove('target')
# ca - number of major vessels
pdp_dist = pdp.pdp_isolate(model=model_RFC, dataset=X_test, model_features=base_features, feature='ca')
pdp.pdp_plot(pdp_dist, 'ca')
plt.show()
```
### Result of Partial Dependence Plot on 'ca' value
We see that the line drops as the number of 'ca' increases, but what does it mean? <br> It means that when the number of major blood vessels **increases**, the probability of heart disease **decreases**.
## Let's build a Logistic Regression model as well
and check **confusion matrix** and accuracy for both models
```
model_lr = LogisticRegression()
model_lr.fit(X_train,y_train)
# confusion matrix for random forest
prediction = model_RFC.predict(X_test)
confusion_matrix(y_test, prediction)
acc = model_RFC.score(X_test,y_test)*100
print("Accuracy of Random Forest = ", acc);
prediction = model_lr.predict(X_test)
confusion_matrix(y_test, prediction)
acc = model_lr.score(X_test,y_test)*100
print("Accuracy of Logistic Regression= ", acc);
```
### From the results we can see the **Random Forest** model has a slightly better result. Its accuracy is higher by 1.6%
Later I'll continue analysing this dataset, and I'm going to check other prediction models to find out which one has the best result.
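One possible next step, sketched here as my own suggestion rather than part of the original kernel, is to compare the two models with k-fold cross-validation instead of a single train/test split, which gives a less split-dependent estimate of accuracy. The `max_iter=1000` setting is just an assumption to help logistic regression converge.
```
from sklearn.model_selection import cross_val_score

# 5-fold cross-validated accuracy for both models on the full dataset
for name, model in [("Random Forest", RandomForestClassifier(max_depth=3)),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X, y, cv=5)
    print(name, round(scores.mean() * 100, 2))
```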
```
from datascience import *
path_data = '../data/'
import matplotlib
matplotlib.use('Agg')
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
import numpy as np
```
# Iteration
It is often the case in programming – especially when dealing with randomness – that we want to repeat a process multiple times. For example, recall the game of betting on one roll of a die with the following rules:
- If the die shows 1 or 2 spots, my net gain is -1 dollar.
- If the die shows 3 or 4 spots, my net gain is 0 dollars.
- If the die shows 5 or 6 spots, my net gain is 1 dollar.
The function `bet_on_one_roll` takes no argument. Each time it is called, it simulates one roll of a fair die and returns the net gain in dollars.
```
def bet_on_one_roll():
"""Returns my net gain on one bet"""
x = np.random.choice(np.arange(1, 7)) # roll a die once and record the number of spots
if x <= 2:
return -1
elif x <= 4:
return 0
elif x <= 6:
return 1
```
Playing this game once is easy:
```
bet_on_one_roll()
```
To get a sense of how variable the results are, we have to play the game over and over again. We could run the cell repeatedly, but that's tedious, and if we wanted to do it a thousand times or a million times, forget it.
A more automated solution is to use a `for` statement to loop over the contents of a sequence. This is called *iteration*. A `for` statement begins with the word `for`, followed by a name we want to give each item in the sequence, followed by the word `in`, and ending with an expression that evaluates to a sequence. The indented body of the `for` statement is executed once *for each item in that sequence*.
```
for animal in make_array('cat', 'dog', 'rabbit'):
print(animal)
```
It is helpful to write code that exactly replicates a `for` statement, without using the `for` statement. This is called *unrolling* the loop.
A `for` statement simply replicates the code inside it, but before each iteration, it assigns a new value from the given sequence to the name we chose. For example, here is an unrolled version of the loop above.
```
animal = make_array('cat', 'dog', 'rabbit').item(0)
print(animal)
animal = make_array('cat', 'dog', 'rabbit').item(1)
print(animal)
animal = make_array('cat', 'dog', 'rabbit').item(2)
print(animal)
```
Notice that the name `animal` is arbitrary, just like any name we assign with `=`.
Here we use a `for` statement in a more realistic way: we print the results of betting five times on the die as described earlier. This is called *simulating* the results of five bets. We use the word *simulating* to remind ourselves that we are not physically rolling dice and exchanging money but using Python to mimic the process.
To repeat a process `n` times, it is common to use the sequence `np.arange(n)` in the `for` statement. It is also common to use a very short name for each item. In our code we will use the name `i` to remind ourselves that it refers to an item.
```
for i in np.arange(5):
print(bet_on_one_roll())
```
In this case, we simply perform exactly the same (random) action several times, so the code in the body of our `for` statement does not actually refer to `i`.
## Augmenting Arrays
While the `for` statement above does simulate the results of five bets, the results are simply printed and are not in a form that we can use for computation. An array of results would be more useful. Thus a typical use of a `for` statement is to create an array of results, by augmenting the array each time.
The `append` method in `NumPy` helps us do this. The call `np.append(array_name, value)` evaluates to a new array that is `array_name` augmented by `value`. When you use `append`, keep in mind that all the entries of an array must have the same type.
```
pets = make_array('Cat', 'Dog')
np.append(pets, 'Another Pet')
```
This keeps the array `pets` unchanged:
```
pets
```
But often while using `for` loops it will be convenient to mutate an array – that is, change it – when augmenting it. This is done by assigning the augmented array to the same name as the original.
```
pets = np.append(pets, 'Another Pet')
pets
```
## Example: Betting on 5 Rolls
We can now simulate five bets on the die and collect the results in an array that we will call the *collection array*. We will start out by creating an empty array for this, and then append the outcome of each bet. Notice that the body of the `for` loop contains two statements. Both statements are executed for each item in the given sequence.
```
outcomes = make_array()
for i in np.arange(5):
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
outcomes
```
Let us rewrite the cell with the `for` statement unrolled:
```
outcomes = make_array()
i = np.arange(5).item(0)
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
i = np.arange(5).item(1)
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
i = np.arange(5).item(2)
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
i = np.arange(5).item(3)
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
i = np.arange(5).item(4)
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
outcomes
```
The contents of the array are likely to be different from the array that we got by running the previous cell, but that is because of randomness in rolling the die. The process for creating the array is exactly the same.
By capturing the results in an array we have given ourselves the ability to use array methods to do computations. For example, we can use `np.count_nonzero` to count the number of times money changed hands.
```
np.count_nonzero(outcomes)
```
## Example: Betting on 300 Rolls
Iteration is a powerful technique. For example, we can see the variation in the results of 300 bets by running exactly the same code for 300 bets instead of five.
```
outcomes = make_array()
for i in np.arange(300):
outcome_of_bet = bet_on_one_roll()
outcomes = np.append(outcomes, outcome_of_bet)
```
The array `outcomes` contains the results of all 300 bets.
```
len(outcomes)
```
To see how often the three different possible results appeared, we can use the array `outcomes` and `Table` methods.
```
outcome_table = Table().with_column('Outcome', outcomes)
outcome_table.group('Outcome').barh(0)
```
Not surprisingly, each of the three outcomes -1, 0, and 1 appeared about 100 times out of the 300, give or take. We will examine the "give or take" amounts more closely in later chapters.
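As a quick check (a sketch that reuses the `outcomes` array from the cells above), we can count each outcome directly:
```
for value in make_array(-1, 0, 1):
    print(value, np.count_nonzero(outcomes == value))
```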
# TIME EVOLUTION OF THE EFT COUNTER-TERMS
```
import numpy as np
from scipy.interpolate import interp1d,InterpolatedUnivariateSpline
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('default')
```
## Quijote simulations
```
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=67.11, Ob0=0.049, Om0= 0.2685)
```
* Best fit values from the Quijote simulation
```
z = np.array([0,0.5,1,2,3])
c2 = np.array([2.629, 0.977, 0.392, 0.100, 0.000])
c2upper = np.array([0.008, 0.004, 0.002, 0.001, 0.000])
c2lower = np.array([0.008, 0.004, 0.002, 0.001, 0.002])
var = c2lower * c2lower
err = np.array([c2lower,c2upper])
```
* Interpolation
```
z_int = np.array([0,0.5,1,2,3, 10, 1100])
c2_int = np.array([2.629, 0.977, 0.392, 0.100, -0.002, 0.0, 0.0])
c2_interpolate = interp1d(z_int, c2_int, kind='cubic')
z_vector = np.linspace(0,3,num=1000)
c2_int = c2_interpolate(z_vector)
```
## C2 model
* Counter-term evolution parametrisation
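The parametrisation implemented in the cell below is

$$c^2(z) = m \, e^{-a z} + n ,$$

where $m$, $n$ and $a$ are the free parameters collected in `theta`.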
```
def c2model(redshift,theta):
m, n, a = theta
return m * np.exp(-a * redshift) + n
```
* Best fit
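Assuming independent Gaussian errors with variances $\sigma_i^2$ given by the `var` array above, the log-likelihood coded below is (dropping the constant normalisation term)

$$\ln \mathcal{L}(\theta) = -\frac{1}{2}\sum_i \frac{\left[c^2_i - c^2(z_i;\theta)\right]^2}{\sigma_i^2} .$$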
```
def log_likelihood(theta, data, covariance):
m, n, a = theta
model = c2model(z, theta)
X = data - model
C = covariance
return -0.5 * np.dot(X, X /C)
from scipy.optimize import minimize
theta_true = np.array([2.5, 0, 2])
nll = lambda *args: -log_likelihood(*args)
initial = theta_true
soln = minimize(nll, initial, args=(c2, var))
p_ml = soln.x
print("Maximum likelihood estimates:")
print(p_ml)
```
* Likelihood analysis
```
def log_prior(theta):
    # a flat (improper) prior, returned as a scalar so that emcee receives a scalar log-probability
    return 0.0
def log_probability(theta, data, covariance):
return log_prior(theta) + log_likelihood(theta, data, covariance)
import emcee
pos = soln.x + 1e-4 * np.random.randn(500, 3)
nwalkers, ndim = pos.shape
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability, args=(c2, var))
sampler.run_mcmc(pos, 10000, progress=True);
fig, axes = plt.subplots(3, figsize=(10, 7), sharex=True)
samples = sampler.get_chain()
labels = ["m", "n", "a"]
for i in range(ndim):
ax = axes[i]
ax.plot(samples[:, :, i], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
ax.set_ylabel(labels[i])
ax.yaxis.set_label_coords(-0.1, 0.5)
axes[-1].set_xlabel("step number");
tau = sampler.get_autocorr_time()
print(tau);
flat_samples = sampler.get_chain(discard=100, thin=15, flat=True)
print(flat_samples.shape)
from IPython.display import display, Math
best_fit = np.empty(3)
for i in range(ndim):
mcmc = np.percentile(flat_samples[:, i], [16, 50, 84])
q = np.diff(mcmc)
best_fit[i] = mcmc[1]
txt = "\mathrm{{{3}}} = {0:.3f}_{{-{1:.3f}}}^{{+{2:.3f}}}"
txt = txt.format(mcmc[1], q[0], q[1], labels[i])
display(Math(txt))
import corner
m_true, n_true, a_true = best_fit
fig = corner.corner(
flat_samples, labels=labels, truths=[m_true, n_true, a_true]);
c2_best = c2model(z_vector,best_fit)
```
### Plot
```
plt.plot(z_vector, c2_int, '--', label='Interpolation')  # plot the cubic interpolation computed above
plt.plot(z_vector, c2_best, 'r', label='Parametrization')
plt.errorbar(z, c2, yerr=err, fmt='.w', capsize=0)
plt.scatter(z, c2, s=50, c='k', label='Best fit')
plt.title('Time evolution of counterterms')
plt.xlabel('Redshift, $z$')
plt.ylabel('$ c^2_{s}/k_{NL}^2$ $(Mpc^2/h^2)$')
plt.legend(fontsize=14, frameon=False)
# plt.savefig('counterterm.pdf')
plt.show()
```
# CNTK 208: Training Acoustic Model with Connectionist Temporal Classification (CTC) Criteria
This tutorial assumes familiarity with the CNTK 10x-series tutorials and basic knowledge of data representation in acoustic modelling tasks. It introduces some CNTK building blocks that can be used in training deep networks for speech recognition, using the CTC training criterion as an example.
## Introduction
The CNTK implementation of CTC is based on the paper by A. Graves et al., *"Connectionist temporal classification: labeling unsegmented sequence data with recurrent neural networks"*. CTC is a popular training criterion for sequence learning tasks, such as speech or handwriting recognition. It requires neither segmentation of the training data nor post-processing of the network outputs to convert them to labels, which significantly simplifies training and decoding while achieving state-of-the-art accuracy.
CTC training runs on several sequences in parallel either on GPU or CPU, achieving maximal utilization of the hardware.

First let us import some of the necessary libraries including CNTK and setup the testing environment.
```
import os
import cntk as C
import numpy as np
# Select the right target device
import cntk.tests.test_utils
cntk.tests.test_utils.set_device_from_pytest_env() # (only needed for our build system)
data_dir = os.path.join("..", "Tests", "EndToEndTests", "Speech", "Data")
print("Current directory {0}".format(os.getcwd()))
if os.path.exists(data_dir):
if os.path.realpath(data_dir) != os.path.realpath(os.getcwd()):
os.chdir(data_dir)
print("Changed to data directory {0}".format(data_dir))
else:
print("Data directory not available locally. Downloading data.")
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
for dir in ['GlobalStats', 'Features']:
if not os.path.exists(dir):
os.mkdir(dir)
for file in ['glob_0000.scp', 'glob_0000.write.scp', 'glob_0000.mlf', 'state_ctc.list', 'GlobalStats/mean.363', 'GlobalStats/var.363', 'Features/000000000.chunk']:
if os.path.exists(file):
print('Already downloaded %s' % file)
else:
print('Downloading %s' % file)
urlretrieve('https://github.com/Microsoft/CNTK/raw/release/2.5/Tests/EndToEndTests/Speech/Data/%s' % file, file)
```
## Read data
CNTK consumes Acoustic Model (AM) training data in HTK/MLF format and typically expects 3 input files
* [SCP file with features](https://github.com/Microsoft/CNTK/blob/master/Tests/EndToEndTests/Speech/Data/glob_0000.scp). SCP file contains mapping of utterance ids to corresponding feature files.
* [MLF file with labels](https://github.com/Microsoft/CNTK/blob/master/Tests/EndToEndTests/Speech/Data/glob_0000.mlf). MLF (master label file) is a traditional format for representing transcription alignment to features. Even though the referenced MLF file contains label boundaries, they are not needed during CTC training and ignored. For more details on feature/label formats, refer to a copy of HTK book, e.g. [here](http://www1.icsi.berkeley.edu/Speech/docs/HTKBook3.2/)
* [States list file](https://github.com/Microsoft/CNTK/blob/master/Tests/EndToEndTests/Speech/Data/state_ctc.list). This file contains the list of all labels (states) in the training set. The blank label required by CTC is located at the end of the file, at index (line) 132, assuming 0-based indexing.
CNTK provides flexible and efficient readers `HTKFeatureDeserializer`/`HTKMLFDeserializer` for acoustic features and labels. These readers follow the [convention over configuration principle](https://en.wikipedia.org/wiki/Convention_over_configuration) and greatly simplify the training procedure. At the same time, they take care of various optimizations of reading from disk/network and of asynchronous CPU/GPU prefetching, which results in a significant speed-up of model training.
**Note**: Currently, CTC training expects the label and feature inputs to have **the same number of frames**, yet the labels don't have to be aligned. An easy way to generate the label file is to distribute the labels uniformly (equally) across the feature frames. Obviously, some labels will be mis-aligned with this setup, but the CTC criterion will take care of it during training; see the original publication for reference.
```
# Type of features/labels and dimensions are application specific
# Here we use rather small dimensional feature and the label set for the sake of keeping the train set compact.
feature_dimension = 33
feature = C.sequence.input((feature_dimension))
label_dimension = 133
label = C.sequence.input((label_dimension))
train_feature_filepath = "glob_0000.scp"
train_label_filepath = "glob_0000.mlf"
mapping_filepath = "state_ctc.list"
try:
train_feature_stream = C.io.HTKFeatureDeserializer(
C.io.StreamDefs(speech_feature = C.io.StreamDef(shape = feature_dimension, scp = train_feature_filepath)))
train_label_stream = C.io.HTKMLFDeserializer(
mapping_filepath, C.io.StreamDefs(speech_label = C.io.StreamDef(shape = label_dimension, mlf = train_label_filepath)), True)
train_data_reader = C.io.MinibatchSource([train_feature_stream, train_label_stream], frame_mode = False)
train_input_map = {feature: train_data_reader.streams.speech_feature, label: train_data_reader.streams.speech_label}
except RuntimeError:
print ("ERROR: not able to read features or labels")
```
## Model creation
In this block we first normalize the features and then define a model with LSTM layers. We normalize the input features to zero mean and unit variance by subtracting the mean vector and multiplying by the [inverse](https://en.wikipedia.org/wiki/Multiplicative_inverse) standard deviation, both of which are stored in separate files.
```
feature_mean = np.fromfile(os.path.join("GlobalStats", "mean.363"), dtype=float, count=feature_dimension)
feature_inverse_stddev = np.fromfile(os.path.join("GlobalStats", "var.363"), dtype=float, count=feature_dimension)
feature_normalized = (feature - feature_mean) * feature_inverse_stddev
with C.default_options(activation=C.sigmoid):
z = C.layers.Sequential([
C.layers.For(range(3), lambda: C.layers.Recurrence(C.layers.LSTM(1024))),
C.layers.Dense(label_dimension)
])(feature_normalized)
```
### Define training hyperparameters
The CTC criterion (loss) function is implemented by the combination of the `labels_to_graph` and `forward_backward` functions. These functions are designed to generalize forward-backward Viterbi-like procedures that are very common in sequential modelling problems, e.g. speech or handwriting. `labels_to_graph` converts the input label sequence into a graph representation suitable for the particular forward-backward procedure, and the `forward_backward` function performs the procedure itself. Currently, these functions only support CTC, which is their default configuration.
```
mbsize = 1024
mbs_per_epoch = 10
max_epochs = 5
criteria = C.forward_backward(C.labels_to_graph(label), z, blankTokenId=132, delayConstraint=3)
err = C.edit_distance_error(z, label, squashInputs=True, tokensToIgnore=[132])
# Learning rate parameter schedule per sample:
# Use 0.01 for the first 3 epochs, followed by 0.001 for the remaining
lr = C.learning_parameter_schedule_per_sample([(3, .01), (1,.001)])
mm = C.momentum_schedule([(1000, 0.9), (0, 0.99)], mbsize)
learner = C.momentum_sgd(z.parameters, lr, mm)
trainer = C.Trainer(z, (criteria, err), learner)
```
## Train
```
C.logging.log_number_of_parameters(z)
progress_printer = C.logging.progress_print.ProgressPrinter(tag='Training', num_epochs = max_epochs)
for epoch in range(max_epochs):
for mb in range(mbs_per_epoch):
minibatch = train_data_reader.next_minibatch(mbsize, input_map = train_input_map)
trainer.train_minibatch(minibatch)
progress_printer.update_with_trainer(trainer, with_metric = True)
print('Trained on a total of ' + str(trainer.total_number_of_samples_seen) + ' frames')
progress_printer.epoch_summary(with_metric = True)
# Uncomment to save the model
# z.save('CTC_' + str(max_epochs) + 'epochs_' + str(mbsize) + 'mbsize_' + str(mbs_per_epoch) + 'mbs.model')
```
## Evaluate
```
test_feature_filepath = "glob_0000.write.scp"
test_feature_stream = C.io.HTKFeatureDeserializer(
C.io.StreamDefs(speech_feature = C.io.StreamDef(shape = feature_dimension, scp = test_feature_filepath)))
test_data_reader = C.io.MinibatchSource([test_feature_stream, train_label_stream], frame_mode = False)
test_input_map = {feature: test_data_reader.streams.speech_feature, label: test_data_reader.streams.speech_label}
num_test_minibatches = 2
test_result = 0.0
for i in range(num_test_minibatches):
test_minibatch = test_data_reader.next_minibatch(mbsize, input_map = test_input_map)
eval_error = trainer.test_minibatch(test_minibatch)
test_result = test_result + eval_error
# Average of evaluation errors of all test minibatches
round(test_result / num_test_minibatches,2)
```
# Handling Categorical Data
Categorical data falls into two types:
* Nominal data
  * Even when encoded as numbers, the values do not express magnitude; they simply label categories.
  * Examples: sex (as encoded in a resident registration number), blood type
* Ordinal data
  * The categories have a relative order that can be compared.
  * Examples: obesity level (underweight, normal, overweight, obese, severely obese), letter grades, preference ratings
  * Most numerical data can be grouped (binned) into ordinal data.
```
import pandas as pd
```
### Sample data
```
df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['A', 'B', 'B', 'A', 'A', 'F']})
df
```
### Converting to categorical data
Convert the raw grades into a categorical type.
```
df["grade"] = df["raw_grade"].astype("category")
df
df.info()
```
### Inspecting and renaming categories
We can give the categories more meaningful names.
Assigning to Series.cat.categories is the appropriate way to do this.
```
df["grade"].cat.categories
df["grade"].cat.categories = ["very good", "good", "very bad"]
df
df["grade"]
```
### Setting new categories
Reorder the categories and, at the same time, add the missing ones.
Pass the categories to cat.set_categories() as a list.
Methods under Series.cat return a new Series by default.
```
df["grade"] = df["grade"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
df
```
### Sorting categorical data
Sorting follows the order defined for the categories, not lexical (alphabetical) order.
Because the categories were declared in the order very bad, bad, medium, good, very good, the sorted result follows that order.
```
df.sort_values(by="grade")
```
### Grouping categorical data
Grouping by the categorical column also shows the empty categories.
```
df.groupby("grade").size()
```
## Binning
Numerical data can be converted into categorical data; pandas provides functions for categorizing numeric values.
* pd.cut(): splits the data into bins defined by explicit boundary values.
* pd.qcut(): splits the data into a given number of bins so that each bin contains the same number of data points, without specifying the boundaries.
```
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
```
### pd.cut() - equal-length buckets categorization
* pd.cut() makes categorization easy: pass the numeric data and the bin boundaries as arguments.
* The data cut by pd.cut() is returned as a categorical Series.
With the five boundary values above, ages is returned as a categorical type with four bins.
```
# 18 ~ 25 / 25 ~ 35 / 35 ~ 60 / 60 ~ 100: four bins in total
cats = pd.cut(ages,bins)
cats
```
cats.codes shows, as integer codes, which bin each element of ages falls into.
For example, 20 belongs to bin 0 (the first bin) and 27 to bin 1 (the second bin).
```
cats.codes
```
cats.value_counts() shows how many elements fall into each bin.
value_counts() works on a categorical Series, so it can count the elements belonging to each bin.
```
cats.value_counts()
```
When calling pd.cut(), you can name each category directly by passing an additional labels=[...] argument.
```
group_names = ["Youth", "YoungAdult", "MiddleAged", "Senior"]
pd.cut(ages, bins, labels= group_names)
```
#### pd.cut(): splitting by the number of bins
Instead of passing the bin boundaries (bins) as a list in the second argument, pass only the number of bins to create.
(The range from the minimum to the maximum value is split into equally spaced bins.)
```
import numpy as np
data = np.random.rand(20)
data
# split the 20 data points into 4 equal-length bins,
# with the bin edges reported to 2 decimal places (precision=2)
pd.cut(data, 4, precision = 2 )
```
### pd.qcut() - equal-size buckets categorization
pandas also provides a function called qcut.
* It creates the specified number of bins.
* While pd.cut() only looks at the minimum and maximum values when splitting the bins,
* pd.qcut() takes the distribution of the data into account and uses quantiles as the bin boundaries, so that each bin holds the same amount of data.
```
data2 = np.random.randn(100)
data2
cats = pd.qcut(data2, 4)
```
* cats = pd.qcut(data2, 4) splits the data into 4 bins.
* Rather than splitting the range between the minimum and maximum into four equal parts, it takes the distribution into account and cuts at the quartiles before deciding the bins.
* Unlike cut, the bins are not guaranteed to have equal width.
```
cats
```
# Exercises
Create the practice data below and work through the exercises.
```
import numpy as np
np.random.seed(7)
df = pd.DataFrame(np.random.randint(160, 190, 100), columns=['height'])
df
```
**Problem 1**
Add a column named level1 that splits height into three bins labeled A, B, and C.
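One possible sketch (not the only solution; it uses the `df` with the `height` column created above and assigns the labels from the lowest to the highest bin):
```
df['level1'] = pd.cut(df['height'], 3, labels=['A', 'B', 'C'])
df.head()
```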
### Problem 2
```
titanic = pd.read_csv('../data/titanic.csv')
```
Using pd.cut(), split the `Age` column of the titanic dataset into five equal-length groups.
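A minimal sketch (assuming the `titanic` DataFrame loaded above; the column name `AgeBand` is just a choice):
```
titanic['AgeBand'] = pd.cut(titanic['Age'], 5)
titanic['AgeBand'].value_counts()
```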
### Problem 3
Compute the survival rate for each of the age bands created above (hint: groupby(['AgeBand'])).
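A sketch along the lines of the hint (it assumes the `AgeBand` column from Problem 2 and a 0/1 `Survived` column, as in the standard titanic dataset):
```
titanic.groupby(['AgeBand'])['Survived'].mean()
```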
# Genotype data formatting
This module implements a collection of workflows used to format genotype data.
## Overview
The module streamlines conversion between PLINK and VCF formats (possibly more to add), specifically:
1. Conversion between VCF and PLINK formats
2. Split data (by specified input, by chromosomes, by genes)
3. Merge data (by specified input, by chromosomes)
## Input
Depending on the analysis task, input files are specified in one of the following formats:
1. A single whole-genome data set, either in VCF format or as a PLINK bim/bed/fam bundle; or,
2. A list of VCF or PLINK bed files; or,
3. A single-column text file listing the VCF or PLINK bed files; or,
4. A two-column text file listing per-chromosome VCF or PLINK bed files, where the first column is the chromosome and the second column is the file name (see the sketch below).
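For illustration, a two-column genotype list might look like the sketch below (file names are placeholders; lines starting with `#` are ignored by the workflows in this module):
```
#chrom  file
1       data/genotype/chr1.bed
6       data/genotype/chr6.bed
```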
## Output
Genotype data after reformatting.
## Examples
A minimal working example data-set, as well as the Singularity container `bioinfo.sif`, can be downloaded from [Google Drive](https://drive.google.com/drive/u/0/folders/1ahIZGnmjcGwSd-BI91C9ayd_Ya8sB2ed).
### PLINK file merger
```
sos run genotype_formatting.ipynb merge_plink \
--genoFile data/genotype/chr1.bed data/genotype/chr6.bed \
--cwd output/genotype \
--name chr1_chr6 \
--container container/bioinfo.sif
```
### Split by genes
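The command below is a sketch of how this workflow could be invoked; `data/regions.tsv` stands for a region list with four columns (chrom, start, end, region name) and is not part of the example data-set.
```
sos run genotype_formatting.ipynb plink_by_gene \
    --genoFile data/genotype/chr1.bed \
    --region_list data/regions.tsv \
    --cwd output/genotype \
    --container container/bioinfo.sif
```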
## Command interface
```
sos run genotype_formatting.ipynb -h
```
```
[global]
# Work directory & output directory
parameter: cwd = path
# The filename name for containers
parameter: container = ''
# For cluster jobs, number commands to run per job
parameter: job_size = 1
# Wall clock time expected
parameter: walltime = "5h"
# Memory expected
parameter: mem = "16G"
# Number of threads
parameter: numThreads = 20
# the path to a bed file or VCF file, a vector of bed files or VCF files, or a text file listing the bed files or VCF files to process
parameter: genoFile = paths
# use this function to edit memory string for PLINK input
from sos.utils import expand_size
cwd = f"{cwd:a}"
import os
def get_genotype_file(geno_file_paths):
#
def valid_geno_file(x):
suffixes = path(x).suffixes
if suffixes[-1] == '.bed':
return True
if len(suffixes)>1 and ''.join(suffixes[-2:]) == ".vcf.gz":
return True
return False
#
def complete_geno_path(x, geno_file):
if not valid_geno_file(x):
raise ValueError(f"Genotype file {x} should be VCF (end with .vcf.gz) or PLINK bed file (end with .bed)")
if not os.path.isfile(x):
# relative path
if not os.path.isfile(f'{geno_file:ad}/' + x):
raise ValueError(f"Cannot find genotype file {x}")
else:
x = f'{geno_file:ad}/' + x
return x
#
def format_chrom(chrom):
if chrom.startswith('chr'):
chrom = chrom[3:]
return chrom
# Inputs are either VCF or bed, or a vector of them
if len(geno_file_paths) > 1:
if all([valid_geno_file(x) for x in geno_file_paths]):
return paths(geno_file_paths)
else:
raise ValueError(f"Invalid input {geno_file_paths}")
# Input is one genotype file or text list of genotype files
geno_file = geno_file_paths[0]
if valid_geno_file(geno_file):
return paths(geno_file)
else:
units = [x.strip().split() for x in open(geno_file).readlines() if x.strip() and not x.strip().startswith('#')]
if all([len(x) == 1 for x in units]):
return paths([complete_geno_path(x[0], geno_file) for x in units])
elif all([len(x) == 2 for x in units]):
genos = dict([(format_chrom(x[0]), path(complete_geno_path(x[1], geno_file))) for x in units])
else:
raise ValueError(f"{geno_file} should contain one column of file names, or two columns of chrom number and corresponding file name")
return genos
genoFile = get_genotype_file(genoFile)
```
## PLINK to VCF
```
[plink_to_vcf]
if isinstance(genoFile, dict):
genoFile = genoFile.values()
input: genoFile, group_by = 1
output: f'{cwd}/{_input:bn}.vcf.gz',
f'{cwd}/{_input:bn}.vcf.gz.tbi'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
bash: expand= "${ }", stderr = f'{_output[0]}.stderr', stdout = f'{_output[0]}.stdout', container = container, volumes = [f'{_input:ad}:{_input:ad}']
plink --bfile ${_input:n} \
--recode vcf-iid \
--out ${_output[0]:nn} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e06} --output-chr chrMT
bgzip -l 9 ${_output[0]:n}
tabix -f -p vcf ${_output[0]}
```
## VCF to PLINK
```
[vcf_to_plink]
if isinstance(genoFile, dict):
genoFile = genoFile.values()
input: genoFile, group_by = 1
output: f'{cwd}/{_input:nn}.bed'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container = container, expand= "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
plink --vcf ${_input} \
--vcf-half-call m \
--vcf-require-gt \
--allow-extra-chr \
--make-bed --out ${_output:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e06}
```
## Split PLINK by genes
```
[plink_by_gene_1]
# cis window size
parameter: window = 500000
# Region definition
parameter: region_list = path
regions = [x.strip().split() for x in open(region_list).readlines() if x.strip() and not x.strip().startswith('#')]
input: genoFile, for_each = 'regions'
output: f'{cwd}/{region_list:bn}_plink_files/{_input:bn}.{_regions[3]}.bed'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: expand= "${ }", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout', container = container, volumes = [f'{region_list:ad}:{region_list:ad}']
plink --bfile ${_input:an} \
--make-bed \
--out ${_output[0]:n} \
--chr ${_regions[0]} \
--from-bp ${f'1' if (int(_regions[1]) - window) < 0 else f'{(int(_regions[1]) - window)}'} \
--to-bp ${int(_regions[2]) + window} \
--allow-no-sex --output-chr chrMT || touch ${_output}
```
## Split PLINK by Chromosome
```
[plink_by_chrom_1]
stop_if(len(paths(genoFile))>1, msg = "This workflow expects one input genotype file.")
parameter: chrom = list
input: genoFile, for_each = "chrom"
output: f'{cwd}/{_input:bn}.{_chrom}.bed'
# look up for genotype file
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: expand= "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout', container = container, volumes = [f'{genoFile:ad}:{genoFile:ad}']
##### Get the locus genotypes for $[_chrom]
plink --bfile $[_input:an] \
--make-bed \
--out $[_output[0]:n] \
--chr $[_chrom] \
--allow-no-sex --output-chr chrMT || true
[plink_by_chrom_2, plink_by_gene_2]
input: group_by = "all"
output: f'{_input[0]:d}/plink_files_list.txt'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
python: expand= "${ }", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout', container = container
import csv
import pandas as pd
data_tempt = pd.DataFrame({
"#id" : [x.split(".")[-2] for x in [${_input:r,}]],
"dir" : [${_input:r,}]
})
data_tempt.to_csv("${_output}",index = False,sep = "\t" )
```
## Split VCF by Chromosome
**FIXME: add this as needed**
## Merge PLINK files
```
[merge_plink]
skip_if(len(genoFile) == 1)
# File prefix for the analysis output
parameter: name = str
input: genoFile, group_by = 'all'
output: f"{cwd}/{name}.merge_list", f"{cwd}/{name}.bed"
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[1]:bn}'
with open(_output[0], 'w') as f:
f.write('\n'.join([str(f'{x:n}') for x in _input[1:]]))
bash: container=container, expand= "${ }", stderr = f'{_output[1]:n}.stderr', stdout = f'{_output[1]:n}.stdout'
plink \
--bfile ${_input[0]:n} \
--merge-list ${_output[0]} \
--make-bed \
--out ${_output[1]:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e06}
```
## Merge VCF files
```
[merge_vcf]
skip_if(len(genoFile) == 1)
# File prefix for the analysis output
parameter: name = str
input: genoFile, group_by = 'all'
output: f"{cwd}/{name}.vcf.gz"
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container=container, expand= "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
bcftools concat -Oz ${_input} > ${_output}
tabix -p vcf ${_output}
```
---
_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._
---
# Assignment 1
In this assignment, you'll be working with messy medical data and using regex to extract relevant information from the data.
Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.
The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates.
Here is a list of some of the variants you might encounter in this dataset:
* 04/20/2009; 04/20/09; 4/20/09; 4/3/09
* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;
* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009
* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009
* Feb 2009; Sep 2009; Oct 2010
* 6/2008; 12/2009
* 2009; 2010
Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order according to the following rules:
* Assume all dates in xx/xx/xx format are mm/dd/yy
* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)
* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).
* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).
* Watch out for potential typos as this is a raw, real-life derived dataset.
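As a quick illustration (not part of the graded function), pandas' default parser already follows some of these conventions:
```
import pandas as pd
print(pd.to_datetime('6/2008'))   # month/year only -> first day of the month: 2008-06-01
print(pd.to_datetime('2010'))     # year only -> first of January: 2010-01-01
print(pd.to_datetime('1/5/89'))   # a two-digit year in this range -> 1989-01-05
```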
With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.
For example if the original series was this:
0 1999
1 2010
2 1978
3 2015
4 1985
Your function should return this:
0 2
1 4
2 0
3 1
4 3
Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.
*This function should return a Series of length 500 and dtype int.*
```
import pandas as pd
import numpy as np
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
df = pd.Series(doc)
df.head(10)
# df.shape
def date_sorter():
# Extract dates
df_dates = df.str.replace(r'(\d+\.\d+)', '')
df_dates = df_dates.str.extractall(r'[\s\.,\-/]*?(?P<ddmonthyyyy>\d\d[\s\.,\-/]+(?:January|February|March|April|May|June|July|August|September|October|November|December|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+(?:19|20)\d\d)|' + \
r'[\s\.,\-/]*?(?P<monthddyyyy>(?:Jan.*\b|February|March|April|May|June|July|August|September|October|November|Dec.*\b|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+\d\d[\s\.,\-/]+?(?:19|20)?\d\d)|' + \
r'[\s\.,\-/]*?(?P<monthyyyy>(?:January|February|March|April|May|June|July|August|September|October|November|December|' + \
'Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[\s\.,\-/]+(?:19|20)\d\d)|' + \
r'(?P<mmddyyyy>[0-3]?\d[\-/]+[0-3]?\d[\-/]+(?:19|20)\d\d)|' + \
r'(?P<mmddyy>[0-3]?\d[\-/]+[0-3]?\d[\-/]+\d\d)|' + \
r'(?P<mmyyyy>[0-1]?\d[\-/]+(?:19|20)\d\d)|' + \
r'(?P<year>(?:19|20)\d\d)')
# Munge dates
df_dates = df_dates.fillna('')
df_dates = df_dates.sum(axis=1).apply(pd.to_datetime)
df_dates = df_dates.reset_index()
df_dates.columns = ['index', 'match', 'dates']
# Sort dates
df_dates.sort_values(by='dates', inplace=True)
result = df_dates.loc[:, 'index'].astype('int32')
# Unit test & Sanity check
assert result.shape[0] == 500
assert result[0].dtype == 'int32'
return result
date_sorter()
```
# Realization of Recursive Filters
*This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*
## Quantization of Variables and Operations
As for [non-recursive filters](../nonrecursive_filters/quantization_effects.ipynb#Quantization-Effects), the practical realization of recursive filters may suffer from the quantization of variables and algebraic operations. The effects of [coefficient quantization](quantization_of_coefficients.ipynb) were already discussed. This section takes a look at the quantization of variables. We limit the investigations to the recursive part of a second-order section (SOS), since any recursive filter of order $N \geq 2$ can be [decomposed into SOSs](cascaded_structures.ipynb).
The computation of the output signal $y[k] = \mathcal{H}\{ x[k] \}$ by a difference equation involves a number of multiplications and additions. As discussed already for [non-recursive filters](../nonrecursive_filters/quantization_effects.ipynb#Quantization-of-Signals-and-Operations), multiplying two numbers in a binary representation (e.g. [two's complement](https://en.wikipedia.org/wiki/Two's_complement) or [floating point](https://en.wikipedia.org/wiki/Floating_point)) requires requantization of the result to keep the word length constant. The addition of two numbers may fall outside the maximum/minimum values of the representation and may suffer from clipping.
The resulting round-off and clipping errors depend on the number and sequence of algebraic operations. These depend on the structure used for implementation of the SOSs. For ease of illustration we limit our discussion to the [direct form I and II](direct_forms.ipynb). Similar insights can be achieved in a similar manner for other structures.
### Analysis of Round-Off Errors
Round-off errors are a consequence of reducing the word length after a multiplication. In order to investigate the influence of these errors on a recursive filter, the statistical model for [round-off errors in multipliers](../nonrecursive_filters/quantization_effects.ipynb#Model-for-round-off-errors-in-multipliers) as introduced for non-recursive filters is used. We furthermore neglect clipping.
The difference equation for the recursive part of a SOS realized in direct form I or II is given as
\begin{equation}
y[k] = x[k] - a_1 \, y[k-1] - a_2 \, y[k-2]
\end{equation}
where $a_0 = 1$, $a_1$ and $a_2$ denote the coefficients of the recursive part. Introducing the requantization after the multipliers into the difference equation yields the output signal $y_Q[k]$
\begin{equation}
y_Q[k] = x[k] - \mathcal{Q} \{ a_{1} \, y[k-1] \} - \mathcal{Q} \{ a_{2} \, y[k-2] \}
\end{equation}
where $\mathcal{Q} \{ \cdot \}$ denotes the requantizer. Requantization is a non-linear process which results in a requantization error. If the value to be requantized is much larger than the quantization step $Q$, the average statistical properties of this error can be modeled as additive uncorrelated white noise. Introducing the error into the above difference equation gives
\begin{equation}
y_Q[k] = x[k] - a_1 \, y[k-1] - e_1[k] - a_2 \, y[k-2] - e_2[k]
\end{equation}
where the two white noise sources $e_1[k]$ and $e_2[k]$ are assumed to be uncorrelated to each other. This difference equation can be split into a set of two difference equations
\begin{align}
y_Q[k] &= y[k] + e[k] \\
y[k] &= x[k] - a_1 \, y[k-1] - a_2 \, y[k-2] \\
e[k] &= - e_1[k] - e_2[k] - a_1 \, e[k-1] - a_2 \, e[k-2]
\end{align}
The first difference equation computes the desired output signal $y[k]$ from the input signal $x[k]$. The second one computes the additive error $e[k]$ due to requantization, driven by the requantization error $- (e_1[k] + e_2[k])$ injected into the recursive filter.
The power spectral density (PSD) $\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the error $e[k]$ is then given as
\begin{equation}
\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = | H(\mathrm{e}^{\,\mathrm{j}\,\Omega})|^2 \cdot (\Phi_{e_1 e_1}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) + \Phi_{e_2 e_2}(\mathrm{e}^{\,\mathrm{j}\,\Omega}))
\end{equation}
According to the model for the requantization errors, their PSDs are given as $\Phi_{e_1 e_1}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \Phi_{e_2 e_2}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{Q^2}{12}$. Introducing this together with the transfer function of the SOS yields
\begin{equation}
\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \left| \frac{1}{1 + a_1 \, \mathrm{e}^{\,-\mathrm{j}\,\Omega} + a_2 \, \mathrm{e}^{\,-\mathrm{j}\,2\,\Omega}} \right|^2 \cdot \frac{Q^2}{6}
\end{equation}
#### Example - Round-off error of a SOS
The following example evaluates the error $e[k] = y_Q[k] - y[k]$ for a SOS which only consists of a recursive part. The desired system response $y[k]$ is computed numerically by floating point operations with double precision; $y_Q[k]$ is computed by applying a uniform midtread quantizer after the multiplications. The system is excited by uniformly distributed white noise. Besides the PSD $\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$, the signal-to-noise ratio (SNR) $10 \cdot \log_{10} \left( \frac{\sigma_y^2}{\sigma_e^2} \right)$ in dB of the filter is evaluated.
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as sig
N = 8192 # length of signals
w = 8 # wordlength for requantization of multiplications
def uniform_midtread_quantizer(x):
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
def no_quantizer(x):
return x
def sos_df1(x, a, requantize=None):
y = np.zeros(len(x)+2) # initial value appended
for k in range(len(x)):
y[k] = x[k] - requantize(a[1]*y[k-1]) - requantize(a[2]*y[k-2])
return y[0:-2]
# coefficients of the SOS
p = 0.90*np.array([np.exp(1j*np.pi/3), np.exp(-1j*np.pi/3)])
a = np.poly(p)
# quantization step
Q = 1/(2**(w-1))
# compute input signal
x = np.random.uniform(low=-1, high=1, size=N)
# compute output signals w and w/o requantization
yQ = sos_df1(x, a, requantize=uniform_midtread_quantizer)
y = sos_df1(x, a, requantize=no_quantizer)
# compute requantization error
e = yQ-y
# Signal-to-noise ratio
SNR = 10*np.log10(np.var(y)/np.var(e))
print('SNR due to requantization: %f dB'%SNR)
# estimate PSD of requantization error
nf, Pxx = sig.welch(e, window='hamming', nperseg=256, noverlap=128)
Pxx = .5*Pxx # due to normalization in scipy.signal
Om = 2*np.pi*nf
# compute frequency response of system
w, H = sig.freqz([1,0,0], a)
# plot results
plt.figure(figsize=(10,4))
plt.plot(Om, Pxx/Q**2 * 12, 'b', label=r'$|\hat{\Phi}_{ee}(e^{j \Omega})|$')
plt.plot(w, np.abs(H)**2 * 2, 'g', label=r'$|H(e^{j \Omega})|^2$')
plt.title('Estimated PSD and transfer function of requantization noise')
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$Q^2/12$')
plt.axis([0, np.pi, 0, 100])
plt.legend()
plt.grid();
```
### Small Limit Cycles
Besides the requantization noise, recursive filters may be subject to periodic oscillations at the output. These undesired oscillations are termed *limit cycles*. Small limit cycles emerge from the additive round-off noise due to requantization after a multiplication. The feedback in a recursive filter feeds the requantization noise back into the filter. This may lead to a periodic output signal with an amplitude in the range of a few quantization steps $Q$, even after the input signal has become zero. The presence, amplitude and frequency of small limit cycles depend on the location of the poles and the structure of the filter. A detailed treatment of this phenomenon is beyond the scope of this notebook and can be found in the literature.
#### Example - Small limit cycles of a SOS
The following example illustrates small limit cycles for the system investigated in the previous example. The input signal is uniformly distributed white noise till time-index $k=256$ and zero for the remainder.
```
# compute input signal
x = np.random.uniform(low=-1, high=1, size=256)
x = np.concatenate((x, np.zeros(1024)))
# compute output signal
yQ = sos_df1(x, a, requantize=uniform_midtread_quantizer)
# plot results
np.seterr(divide='ignore')
plt.figure(figsize=(10, 3))
plt.plot(20*np.log10(np.abs(yQ)))
plt.title('Level of output signal')
plt.xlabel(r'$k$')
plt.ylabel(r'$|y_Q[k]|$ in dB')
plt.grid()
plt.figure(figsize=(10, 3))
k = np.arange(1000, 1050)
plt.stem(k, yQ[k]/Q)
plt.title('Output signal for zero input')
plt.xlabel(r'$k$')
plt.ylabel(r'$y_Q[k] / Q$ ')
plt.axis([k[0], k[-1], -3, 3])
plt.grid();
```
**Exercise**
* Estimate the period of the small limit cycles. How is it related to the poles of the system?
* What amplitude range is spanned?
Solution: The period of the small limit cycles can be estimated from the second illustration as $P = 6$. The normalized frequency of a harmonic exponential signal with the same periodicity is given as $\Omega_0 = \frac{2 \pi}{P} = \frac{\pi}{3}$. The poles of the system can be extracted from the code of the first example as $z_{\infty 0,1} = 0.9 \cdot e^{\pm j \frac{\pi}{3}}$. The periodicity of the small limit cycles is hence linked to the normalized frequency of the poles. The amplitude range spanned by the small limit cycles is $\pm 2 Q$.
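The period can also be checked numerically, for instance by locating the dominant frequency in the zero-input part of the output (a rough sketch that reuses `yQ` from the cell above):
```
segment = yQ[512:]  # zero-input part of the output (the input is zero from k=256 on; skip some transient)
spectrum = np.abs(np.fft.rfft(segment - np.mean(segment)))
k0 = np.argmax(spectrum[1:]) + 1  # dominant non-DC bin
print('estimated period: %.1f samples' % (len(segment)/k0))
```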
### Large Limit Cycles
Large limit cycles are periodic oscillations of a recursive filter due to overflows in the multiplications/additions. As with small limit cycles, large limit cycles may persist even after the input signal has become zero. Their level is typically in the range of the minimum/maximum value of the requantizer. Large limit cycles should therefore be avoided in a practical implementation. The presence of large limit cycles depends on the scaling of the input signal and coefficients, as well as the strategy used to cope with clipping. Amongst others, they can be avoided by proper scaling of the coefficients to prevent overflow. Again, a detailed treatment of this phenomenon is beyond the scope of this notebook and can be found in the literature.
#### Example - Large limit cycles of a SOS
The following example illustrates large limit cycles for the system investigated in the first example. In order to trigger large limit cycles, the coefficients of the filter have been doubled. The input signal is uniformly distributed white noise till time-index $k=256$ and zero for the remainder.
```
def uniform_midtread_quantizer(x, xmin=1):
# limiter
x = np.copy(x)
if x <= -xmin:
x = -1
if x > xmin - Q:
x = 1 - Q
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
# compute input signal
x = np.random.uniform(low=-1, high=1, size=256)
x = np.concatenate((x, np.zeros(1024)))
# compute output signal
yQ = sos_df1(x, 2*a, requantize=uniform_midtread_quantizer)
# plot results
plt.figure(figsize=(10, 3))
plt.plot(20*np.log10(np.abs(yQ)))
plt.title('Level of output signal')
plt.xlabel(r'$k$')
plt.ylabel(r'$|y_Q[k]|$ in dB')
plt.grid()
plt.figure(figsize=(10, 3))
k = np.arange(1000, 1050)
plt.stem(k, yQ[k])
plt.title('Output signal for zero input')
plt.xlabel(r'$k$')
plt.ylabel(r'$y_Q[k]$ ')
plt.grid();
```
**Exercise**
* Determine the period of the large limit cycles. How is it related to the poles of the system?
Solution: The period of the large limit cycles can be estimated from the second illustration as $P = 6$. The normalized frequency of a harmonic exponential signal with the same periodicity is given as $\Omega_0 = \frac{2 \pi}{P} = \frac{\pi}{3}$. The poles of the system can be extracted from the code of the first example as $z_{\infty 0,1} = 0.9 \cdot e^{\pm j \frac{\pi}{3}}$. The periodicity of the large limit cycles is hence linked to the normalized frequency of the poles.
**Copyright**
This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples, 2016-2018*.
```
%matplotlib inline
```
PyTorch: Defining New autograd Functions
----------------------------------------
We fit a third-order polynomial to predict $y=\sin(x)$ from $-\pi$ to $\pi$ by
minimizing the Euclidean distance.
Instead of writing the polynomial as $y=a+bx+cx^2+dx^3$, we write it as $y=a+b P_3(c+dx)$,
where $P_3(x)=\frac{1}{2}\left(5x^3-3x\right)$ is the third-order
`Legendre polynomial`_.
https://en.wikipedia.org/wiki/Legendre_polynomials
This implementation computes the forward pass using PyTorch tensor operations and
computes the gradients using PyTorch autograd.
In the implementation below we define a custom autograd Function to evaluate $P_3'(x)$.
Mathematically, $P_3'(x)=\frac{3}{2}\left(5x^2-1\right)$.
```
import torch
import math
class LegendrePolynomial3(torch.autograd.Function):
"""
    We implement our own custom autograd Function by subclassing torch.autograd.Function
    and implementing the forward and backward passes, which operate on Tensors.
"""
@staticmethod
def forward(ctx, input):
"""
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object used to stash
        information for the backward computation. You can cache arbitrary objects
        for use in the backward pass using the ctx.save_for_backward method.
"""
ctx.save_for_backward(input)
return 0.5 * (5 * input ** 3 - 3 * input)
@staticmethod
def backward(ctx, grad_output):
"""
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
"""
input, = ctx.saved_tensors
return grad_output * 1.5 * (5 * input ** 2 - 1)
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU
# Create Tensors to hold input and output.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
# Create random Tensors for the weights. For this example we need 4 weights:
# y = a + b * P3(c + d * x)
# These weights need to be initialized not too far from the correct result
# to ensure convergence.
# Setting requires_grad=True indicates that we want to compute gradients
# with respect to these Tensors during the backward pass.
a = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
b = torch.full((), -1.0, device=device, dtype=dtype, requires_grad=True)
c = torch.full((), 0.0, device=device, dtype=dtype, requires_grad=True)
d = torch.full((), 0.3, device=device, dtype=dtype, requires_grad=True)
learning_rate = 5e-6
for t in range(2000):
    # To apply our custom Function, we use the Function.apply method.
    # We alias this as 'P3'.
P3 = LegendrePolynomial3.apply
    # Forward pass: compute predicted y using operations;
    # we compute P3 using our custom autograd operation.
y_pred = a + b * P3(c + d * x)
    # Compute and print loss
loss = (y_pred - y).pow(2).sum()
if t % 100 == 99:
print(t, loss.item())
    # Use autograd to compute the backward pass.
loss.backward()
    # Update the weights using gradient descent.
with torch.no_grad():
a -= learning_rate * a.grad
b -= learning_rate * b.grad
c -= learning_rate * c.grad
d -= learning_rate * d.grad
        # Manually zero the gradients after updating the weights.
a.grad = None
b.grad = None
c.grad = None
d.grad = None
print(f'Result: y = {a.item()} + {b.item()} * P3({c.item()} + {d.item()} x)')
```
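As a sanity check (not part of the original tutorial), the analytical backward pass above can be compared against numerical differentiation with `torch.autograd.gradcheck`, which expects double-precision inputs; this assumes the `LegendrePolynomial3` class from the cell above is still in scope:
```
import torch
from torch.autograd import gradcheck

# gradcheck compares backward() against finite differences and prints True if they agree
test_input = torch.randn(20, dtype=torch.double, requires_grad=True)
print(gradcheck(LegendrePolynomial3.apply, (test_input,), eps=1e-6, atol=1e-4))
```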
# Research Project: Numerical Models
---
```
import ipywidgets as widgets
from matplotlib import pyplot as plt
import numpy as np
from numpy import linalg as LA
from math import sqrt, pi
from project import eigen
```
## Mathematical Model in Differential Equations
$ m(x) \frac{\partial^{2}u}{\partial t^{2}} + EJ \frac{\partial^{4}u(x, t)}{\partial x^{4}} = p(x)g(t) \quad\text{;}\quad \Omega = \{x \in \mathbb{R}: 0 \leq x \leq L \}$
Boundary conditions:
$\forall t \quad\text{at}\quad x = 0
\begin{cases}
u(0, t) = 0 \\
\frac{\partial u}{\partial x}|_{(0,t)} = 0
\end{cases}
$
$\forall t \quad\text{at}\quad x = L
\begin{cases}
EJ \frac{\partial^{2}u(x, t)}{\partial x^{2}} = M_{L} g_{c}(t) \\
EJ \frac{\partial^{3}u(x, t)}{\partial x^{3}} = 0
\end{cases}
$
Zero initial conditions.
## System Data
```
L = 420
EJ = 9.45e11
m = 64.8
p = 100
ML = 50000
```
## Discrete Mathematical Model with N Degrees of Freedom
Discretization of the domain
```
widgetN = widgets.FloatText()
display(widgetN)
N = 20
h = L/N
```
Discrete form of the differential equation, of the form:
$\mathbb{M} \cdot \bar{\ddot{u(t)}} + \mathbb{K} \cdot \bar{u(t)} = g(t) \cdot \bar{f_{p}} + g_{c}(t) \cdot \bar{f_{M}}$
Where:
```
M = np.zeros((N, N))
for i in range(0, N):
M[i, i] = m
K = np.zeros((N, N))
K[0, 0:3] = [7, -4, 1]
K[1, 0:4] = [-4, 6, -4, 1]
# Fill in the rows for N discretization points
if (N > 4):
for i in range(2, N-2):
K[i, i-2:i+3] = [1, -4, 6, -4, 1]
K[N-2, N-4:N] = [1, -4, 5, -2]
K[N-1, N-3:N] = [2, -4, 2]
K = (EJ/h**4) * K
fp = np.full((N,), p)
fM = np.zeros((N,))
fM[N-2] = -1
fM[N-1] = 2
fM = (ML/(h*h)) * fM[:]
```
## Mathematical Model with 1 Degree of Freedom
A change of basis is performed onto the space spanned by the natural vibration modes of the discrete N-degree-of-freedom mathematical model.
This is an eigenvalue/eigenvector problem of the form:
$(\mathbb{K} - \omega^{2}\mathbb{M}) \cdot \bar{\phi} = \bar{0}$
which is manipulated into the form:
$(\mathbb{A} - \lambda \cdot \mathbb{I}) \cdot \bar{\phi} = \bar{0}$
The first eigenvalue and eigenvector are then computed with the inverse iteration method:
```
A = LA.inv(M).dot(K)
alpha, phi = eigen.inverse_iteration(A)
# The 1D array is turned into a 2D one so that it can be transposed and treated
# as a column vector. There must be a better way (TO FIX)
phi = np.array(phi)[np.newaxis].transpose()
```
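The `eigen.inverse_iteration` routine comes from the local `project` package, so its implementation is not shown in this notebook. The following is a minimal sketch of what inverse (power) iteration typically looks like, assuming a well-separated smallest eigenvalue; it is an illustration, not the project's actual code:
```
import numpy as np
from numpy import linalg as LA

def inverse_iteration_sketch(A, tol=1e-10, max_iter=1000):
    """Return an approximation of the smallest eigenvalue of A and its eigenvector."""
    v = np.ones(A.shape[0])
    Ainv = LA.inv(A)           # in practice one would factor A and solve instead of inverting
    lam_old = 0.0
    for _ in range(max_iter):
        v = Ainv.dot(v)
        v = v / LA.norm(v)     # keep the iterate normalized
        lam = v.dot(A.dot(v))  # Rayleigh quotient converges to the smallest eigenvalue
        if abs(lam - lam_old) <= tol * abs(lam):
            break
        lam_old = lam
    return lam, v
```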
The first left eigenvector of the matrix $(\mathbb{K} - \omega^{2}\mathbb{M})$ can be obtained as the first right eigenvector of the matrix $(\mathbb{K}^{T} - \omega^{2} \cdot \mathbb{M}^{T})$:
```
A = LA.inv(M.transpose()).dot(K.transpose())
alpha_n, phi_n = eigen.inverse_iteration(A) # obtain the smallest eigenvalue via the inverse power method
# The 1D array is turned into a 2D one so that it can be transposed and treated
# as a column vector. There must be a better way (TO FIX)
phi_n = np.array(phi_n)[np.newaxis].transpose()
```
The modal decomposition still needs to be explained (?)
This yields the following single-degree-of-freedom dynamic equation:
$\ddot{q(t)} + \omega_{n}^{2} \cdot q(t) = g(t) \cdot b_{p} + g_{c}(t) \cdot b_{M}$
```
mn = phi_n.transpose().dot(M.dot(phi))[0, 0]
kn = phi_n.transpose().dot(K.dot(phi))[0, 0]
print("mn = " + str(mn) + " - kn = " + str(kn))
wn = sqrt(kn/mn)
print(sqrt(alpha), wn)
T = 2*pi/wn
print("T = " + str(T))
bp = phi_n.transpose().dot(fp)/mn
bp = bp[0]
bM = phi_n.transpose().dot(fM)/mn
bM = bM[0]
print("bp = " + str(bp) + " - bM = " + str(bM))
```
It is worth emphasizing that the inverse power iteration method can only be used in this single-degree-of-freedom case, since it only yields the smallest eigenpair (which is the fundamental one and the one needed here). For the general case, another numerical method must be applied.
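For the general case one can, for example, use `scipy.linalg.eig`, which solves the full generalized eigenvalue problem directly. This is only an illustration, assuming the matrices `M` and `K` built above; it is not used in the rest of the notebook:
```
import numpy as np
from scipy import linalg

# Generalized eigenvalue problem K*phi = w^2 * M*phi (the eigenvalues are w^2)
eigvals, eigvecs = linalg.eig(K, M)
order = np.argsort(np.abs(eigvals))       # sort modes from the fundamental upwards
omegas = np.sqrt(np.abs(eigvals[order]))  # natural frequencies in rad/s
print('first three natural frequencies:', omegas[:3])
```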
## Solving the Differential Equation by Order Reduction
Given the differential equation of the form:
$\ddot{q(t)} + \omega_{n}^{2} \cdot q(t) = g(t) \cdot b_{p} + g_{c}(t) \cdot b_{M}$
it can be solved by order reduction, defining $y_{1}(t) = q(t)$ and $y_{2}(t) = \frac{dy_{1}}{dt}$, so that:
$
\begin{cases}
\frac{dy_{1}}{dt} = y_{2}(t) \\
\frac{dy_{2}}{dt} = - \omega_{n}^{2} \cdot y_{1}(t) + g(t) \cdot b_{p} + g_{c}(t) \cdot b_{M}
\end{cases}
$
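The cells below build this system in matrix form. As an equivalent formulation (an illustrative sketch in which `g` and `gc` are assumed to be callables returning the load histories), the right-hand side can also be written as a single function suitable for any ODE integrator:
```
import numpy as np

def rhs(t, y, wn, bp, bM, g, gc):
    """State-space right-hand side of the reduced 1-DOF equation."""
    y1, y2 = y
    return np.array([y2, -wn**2 * y1 + g(t) * bp + gc(t) * bM])
```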
---
# Systems of First-Order Ordinary Differential Equations
## Euler's Method
This is one of the oldest and best-known methods for the **numerical integration** of differential equations, devised by Euler more than 200 years ago. It is easy to understand and use, but not as accurate as other methods. In general terms, it is a **first-order Runge-Kutta method**.
For a first-order differential equation of the form:
$\frac{dy}{dt} = f(x, y)$
the update expression is:
$y_{i+1} = y_{i} + h \cdot f(x_{i}, y_{i})$
```
A = np.array([[0, 1], [-wn**2, 0]])
# Initial conditions of the system
y0 = np.zeros((2,))
t0 = 0
# Time step
Dt = 1e-2
# Number of points
tf = 6*T
NDt = np.ceil(tf/Dt).astype(int) # round up to the next integer and cast to int
# Array allocation
t = np.zeros((NDt,))
y = np.zeros((2, NDt))
ya = np.zeros((2,))
k1 = np.zeros((2,))
# External loads
# g(t): unit impulse at t=0
g = np.zeros((NDt,))
g[0] = 1
B = np.zeros((2, NDt))
B[1, :] = bp * g
# System initialization
t[0] = t0
y[:, 0] = y0
for i in range(0, NDt-1):
ya = y[:, i]
ta = t[i]
k1 = Dt * A.dot(ya) + B[:, i]
y[:, i+1] = ya + k1
    t[i+1] = ta + Dt # this could be simplified with np.arange at the beginning
plt.plot(t, y[0, :], label='u(L, t)')
plt.plot(t, g, label='g(t)')
plt.title('u(L, t)')
plt.xlabel('t [s]')
plt.grid()
plt.legend()
```
### Modified Euler Method
```
A = np.array([[0, 1], [-wn**2, 0]])
# Initial conditions of the system
y0 = np.zeros((2,))
t0 = 0
# Time step
Dt = 1e-2
# Number of points
tf = 6*T
NDt = np.ceil(tf/Dt).astype(int) # round up to the next integer and cast to int
# Array allocation
t = np.zeros((NDt,))
y = np.zeros((2, NDt))
yg = np.zeros((2,))
ya = np.zeros((2,))
k1 = np.zeros((2,))
k2 = np.zeros((2,))
# External loads
# g(t): unit impulse at t=0
g = np.zeros((NDt,))
g[0] = 1
B = np.zeros((2, NDt))
B[1, :] = bp * g
# System initialization
t[0] = t0
y[:, 0] = y0
for i in range(0, NDt-1):
ya = y[:, i]
ta = t[i]
k1 = Dt * A.dot(ya) + B[:, i]
yg = ya + k1/2
tg = ta + Dt/2
k2 = Dt * A.dot(yg) + B[:, i]
y[:, i+1] = ya + k2
    t[i+1] = ta + Dt # this could be simplified with np.arange at the beginning
plt.plot(t, y[0, :], label='u(L, t)')
plt.plot(t, g, label='g(t)')
plt.title('u(L, t)')
plt.xlabel('t [s]')
plt.grid()
plt.legend()
```
# Chunking using RNN and Bi-LSTM
## Importing required Libraries
```
# Let us import required Libraries
import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.backend as K
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
import matplotlib.pyplot as plt
import numpy as np
from gensim.models import KeyedVectors
```
## Dataset
```
# Let us declare and load the dataset
DATA_DIR = '{}.txt'
def get_data(file):
with open(file, 'r', encoding='latin1') as fp:
content = fp.readlines()
data, sent = [], []
for line in content:
if not line.strip():
if sent: data.append(sent)
sent = []
else:
word, pos, tag = line.strip().split()
tag = tag.split('-')[0]
sent.append((word, pos, tag))
return data
train_data = get_data(DATA_DIR.format('train'))
test_data = get_data(DATA_DIR.format('test'))
# Declaring some of the factors
MAX_LEN = 78
empty_token = '<UNK>'
empty_pos = '^'
empty_tag = '$'
pad_value = 0
embed_dim = 300
# Now, we have to separate the words and corresponding tags, so we will do the following
sentences_train = [' '.join([tup[0].lower() for tup in sent]) for sent in train_data]
sent_tokenizer = Tokenizer(oov_token=empty_token, filters='\t\n')
sent_tokenizer.fit_on_texts(sentences_train)
sentences_train = sent_tokenizer.texts_to_sequences(sentences_train)
sentences_train = pad_sequences(sentences_train, padding='post', value=pad_value, maxlen=MAX_LEN)
NUM_WORDS = len(sent_tokenizer.word_index)
sentences_test = [' '.join([tup[0].lower() for tup in sent]) for sent in test_data]
sentences_test = sent_tokenizer.texts_to_sequences(sentences_test)
sentences_test = pad_sequences(sentences_test, padding='post', value=pad_value, maxlen=MAX_LEN)
postags_train = [' '.join([tup[1].lower() for tup in sent]) for sent in train_data]
pos_tokenizer = Tokenizer(oov_token=empty_pos, filters='\t\n')
pos_tokenizer.fit_on_texts(postags_train)
postags_train = pos_tokenizer.texts_to_sequences(postags_train)
postags_train = pad_sequences(postags_train, padding='post', value=pad_value, maxlen=MAX_LEN)
NUM_POS = len(pos_tokenizer.word_index)
postags_test = [' '.join([tup[1].lower() for tup in sent]) for sent in test_data]
postags_test = pos_tokenizer.texts_to_sequences(postags_test)
postags_test = pad_sequences(postags_test, padding='post', value=pad_value, maxlen=MAX_LEN)
tags_train = [[tup[2] for tup in sent] for sent in train_data]
tag_tokenizer = Tokenizer(oov_token=empty_tag, filters='\t\n')
tag_tokenizer.fit_on_texts(tags_train)
tags_train = tag_tokenizer.texts_to_sequences(tags_train)
tags_train = pad_sequences(tags_train, padding='post', value=pad_value, maxlen=MAX_LEN)
NUM_TAGS = len(tag_tokenizer.word_index)
tags_test = [[tup[2] for tup in sent] for sent in test_data]
tags_test = tag_tokenizer.texts_to_sequences(tags_test)
tags_test = pad_sequences(tags_test, padding='post', value=pad_value, maxlen=MAX_LEN)
# Sentences, their Pos Tags and Chunk tags
l=8936
sentences_train,sentences_validation=sentences_train[:l*85//100],sentences_train[1+l*85//100:]
postags_train,postags_validation=postags_train[:l*85//100],postags_train[1+l*85//100:]
tags_train,tags_validation=tags_train[:l*85//100],tags_train[1+l*85//100:]
print(len(sentences_train),len(sentences_validation))
print(len(postags_train),len(postags_validation))
print(len(tags_train),len(tags_validation))
```
## Word-Embedding Model
```
import gensim.downloader as api
wv = api.load('word2vec-google-news-300')
def load_word2vec():
embedding_matrix = np.random.normal(size=(NUM_WORDS + 1, embed_dim))
for word, idx in sent_tokenizer.word_index.items():
if word in wv.vocab:
embedding_matrix[idx] = wv.word_vec(word)
return embedding_matrix
# Loading Word2Vec Model
word2vec=load_word2vec()
def ignore_accuracy_of_class(class_to_ignore=0):
def acc(y_true, y_pred):
y_true_class=tf.cast(y_true, tf.int64)
y_pred_class = K.argmax(y_pred, axis=-1)
ignore_mask = K.cast(K.not_equal(y_pred_class, class_to_ignore), 'int32')
matches = K.cast(K.equal(y_true_class, y_pred_class), 'int32') * ignore_mask
accuracy = K.sum(matches) / K.maximum(K.sum(ignore_mask), 1)
return accuracy
return acc
custom_acc = ignore_accuracy_of_class(pad_value)
```
## Bi-LSTM Model
```
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, LSTM, Input, Bidirectional, TimeDistributed, Embedding, Concatenate,SimpleRNN, RNN
def model_with_pos():
with strategy.scope():
custom_emb = keras.initializers.Constant(word2vec)
regularizer = tf.keras.regularizers.l1_l2(l1=1e-4, l2=1e-3)
word_inputs = Input(shape=(MAX_LEN,), dtype='int32')
pos_inputs = Input(shape=(MAX_LEN,), dtype='int32')
word_emb = Embedding(NUM_WORDS + 1, embed_dim, embeddings_initializer=custom_emb, trainable=True)(word_inputs)
pos_emb = Embedding(NUM_POS + 1, 25, trainable=True)(pos_inputs)
emb = Concatenate(axis=-1)([word_emb, pos_emb])
lstm = Bidirectional(LSTM(32, return_sequences=True, kernel_regularizer=regularizer))(emb)
td = TimeDistributed(Dense(NUM_TAGS + 1, activation='softmax', kernel_regularizer=regularizer))(lstm)
model = Model(inputs=[word_inputs, pos_inputs], outputs=[td])
return model
```
## Summary of the Model
```
import tensorflow as tf
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
AUTO = tf.data.experimental.AUTOTUNE
print("REPLICAS: ", strategy.num_replicas_in_sync)
K.clear_session()
model = model_with_pos()
print(model.summary())
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
losses, val_losses = [], []
accs, val_accs = [], []
model_name = 'bilstm_chunker.h5'
stopper = EarlyStopping(monitor='acc', patience=5, mode='max')
checkpointer = ModelCheckpoint(filepath=model_name, monitor='val_acc', mode='max', save_best_only=True, verbose=2)
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(0.001), metrics=[custom_acc])
history = model.fit(
x = [sentences_train, postags_train],
y = tags_train,
validation_data = ([sentences_validation, postags_validation], tags_validation),
callbacks = [stopper],
epochs = 10,
batch_size = 128 * strategy.num_replicas_in_sync,
verbose = 1,
)
losses += list(history.history['loss'])
val_losses += list(history.history['val_loss'])
accs += list(history.history['acc'])
val_accs += list(history.history['val_acc'])
```
## Plots
```
fig, ax = plt.subplots(1, 2, figsize=(18, 6))
ax[0].plot(losses, label='Loss (training data)')
ax[0].plot(val_losses, label='Loss (validation data)')
ax[0].set_title('Loss Trend')
ax[0].set_ylabel('Loss value')
ax[0].set_xlabel('No. of epochs')
ax[0].legend(loc="upper left")
ax[1].plot(accs, label='Accuracy (training data)')
ax[1].plot(val_accs, label='Accuracy (validation data)')
ax[1].set_title('Accuracy Trend')
ax[1].set_ylabel('Accuracy value')
ax[1].set_xlabel('No. of epochs')
ax[1].legend(loc="upper left")
plt.tight_layout()
plt.savefig('loss_acc_trend.png', bbox_inches='tight')
plt.show()
```
## RNN Model
```
def rnn_model_with_pos():
with strategy.scope():
custom_emb = keras.initializers.Constant(word2vec)
regularizer = tf.keras.regularizers.l1_l2(l1=1e-4, l2=1e-3)
word_inputs = Input(shape=(MAX_LEN,), dtype='int32')
pos_inputs = Input(shape=(MAX_LEN,), dtype='int32')
word_emb = Embedding(NUM_WORDS + 1, embed_dim, embeddings_initializer=custom_emb, trainable=True)(word_inputs)
pos_emb = Embedding(NUM_POS + 1, 25, trainable=True)(pos_inputs)
emb = Concatenate(axis=-1)([word_emb, pos_emb])
rnn = SimpleRNN(32, return_sequences=True, kernel_regularizer=regularizer)(emb)
td = TimeDistributed(Dense(NUM_TAGS + 1, activation='softmax', kernel_regularizer=regularizer))(rnn)
model = Model(inputs=[word_inputs, pos_inputs], outputs=[td])
return model
```
## Summary of the Model
```
rnn_model = rnn_model_with_pos()
print(rnn_model.summary())
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
rnn_losses, rnn_val_losses = [], []
rnn_accs, rnn_val_accs = [], []
rnn_model_name = 'rnn_chunker.h5'
stopper = EarlyStopping(monitor='acc', patience=5, mode='max')
checkpointer = ModelCheckpoint(filepath=rnn_model_name, monitor='val_acc', mode='max', save_best_only=True, verbose=2)
rnn_model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(0.001), metrics=[custom_acc])
rnn_history = rnn_model.fit(
x = [sentences_train, postags_train],
y = tags_train,
validation_data = ([sentences_validation, postags_validation], tags_validation),
callbacks = [stopper],
epochs = 10,
batch_size = 128 * strategy.num_replicas_in_sync,
verbose = 1,
)
rnn_losses += list(rnn_history.history['loss'])
rnn_val_losses += list(rnn_history.history['val_loss'])
rnn_accs += list(rnn_history.history['acc'])
rnn_val_accs += list(rnn_history.history['val_acc'])
```
## Plots
```
fig, ax = plt.subplots(1, 2, figsize=(18, 6))
ax[0].plot(rnn_losses, label='Loss (training data)')
ax[0].plot(rnn_val_losses, label='Loss (validation data)')
ax[0].set_title('Loss Trend')
ax[0].set_ylabel('Loss value')
ax[0].set_xlabel('No. of epochs')
ax[0].legend(loc="upper left")
ax[1].plot(rnn_accs, label='Accuracy (training data)')
ax[1].plot(rnn_val_accs, label='Accuracy (validation data)')
ax[1].set_title('Accuracy Trend')
ax[1].set_ylabel('Accuracy value')
ax[1].set_xlabel('No. of epochs')
ax[1].legend(loc="upper left")
plt.tight_layout()
plt.savefig('loss_acc_trend.png', bbox_inches='tight')
plt.show()
```
# DS1000E Rigol Waveform Examples
**Scott Prahl**
**March 2021**
This notebook shows how to extract signals from a `.wfm` file created by the Rigol DS1000E scope. It also validates the process by comparing against `.csv` exports and screenshots.
Two different `.wfm` files are examined: one for the DS1052E scope and one for the DS1102E scope. The accompanying `.csv` files seem to have t=0 at the center of the waveform.
*If RigolWFM is not installed, uncomment the following cell (i.e., delete the #) and run (shift-enter)*
```
#!pip install RigolWFM
import numpy as np
import matplotlib.pyplot as plt
try:
import RigolWFM.wfm as rigol
except ModuleNotFoundError:
print('RigolWFM not installed. To install, uncomment and run the cell below.')
print('Once installation is successful, rerun this cell again.')
repo = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/"
```
A list of Rigol scopes in the DS1000E family is:
```
print(rigol.DS1000E_scopes[:])
```
## DS1052E
We will start with a `.wfm` file from a Rigol DS1052E scope. This test file accompanies [wfm_view.exe](http://meteleskublesku.cz/wfm_view/), a freeware program from <http://www.hakasoft.com.au>.
The waveform looks like
<img src="https://github.com/scottprahl/RigolWFM/raw/master/wfm/DS1052E.png" width="50%">
Now let's look at a plot of the data from the corresponding `.csv` file created by [wfm_view.exe](http://meteleskublesku.cz/wfm_view/)
```
csv_filename_52 = "https://raw.githubusercontent.com/scottprahl/RigolWFM/master/wfm/DS1052E.csv"
csv_data = np.genfromtxt(csv_filename_52, delimiter=',', skip_header=19, skip_footer=1, encoding='latin1').T
center_time = csv_data[0][-1]*1e6/2
plt.subplot(211)
plt.plot(csv_data[0]*1e6,csv_data[1], color='green')
plt.title("DS1052E from .csv file")
plt.ylabel("Volts (V)")
plt.xlim(center_time-0.6,center_time+0.6)
plt.xticks([])
plt.subplot(212)
plt.plot(csv_data[0]*1e6,csv_data[2], color='red')
plt.xlabel("Time (µs)")
plt.ylabel("Volts (V)")
plt.xlim(center_time-0.6,center_time+0.6)
plt.show()
```
### Now for the `.wfm` data
First a textual description.
```
# raw=true is needed because this is a binary file
wfm_url = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/DS1052E.wfm" + "?raw=true"
w = rigol.Wfm.from_url(wfm_url, '1000E')
description = w.describe()
print(description)
ch = w.channels[0]
plt.subplot(211)
plt.plot(ch.times*1e6, ch.volts, color='green')
plt.title("DS1052E from .wfm file")
plt.ylabel("Volts (V)")
plt.xlim(-0.6,0.6)
plt.xticks([])
ch = w.channels[1]
plt.subplot(212)
plt.plot(ch.times*1e6, ch.volts, color='red')
plt.xlabel("Time (µs)")
plt.ylabel("Volts (V)")
plt.xlim(-0.6,0.6)
plt.show()
```
## DS1102E-B
### First the `.csv` data
This file only has one active channel. Let's see what the accompanying `.csv` data looks like.
```
csv_filename = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/DS1102E-B.csv"
my_data = np.genfromtxt(csv_filename, delimiter=',', skip_header=2).T
plt.plot(my_data[0]*1e6, my_data[1])
plt.xlabel("Time (µs)")
plt.ylabel("Volts (V)")
plt.title("DS1102E-B with a single trace")
plt.show()
```
### Now for the `wfm` data
First let's have a look at the description of the internal file structure. We see that only channel 1 has been enabled.
```
# raw=true is needed because this is a binary file
wfm_url = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/DS1102E-B.wfm" + "?raw=true"
w = rigol.Wfm.from_url(wfm_url, 'DS1102E')
description = w.describe()
print(description)
w.plot()
plt.xlim(-6,6)
plt.show()
```
## DS1102E-E
[Contributed by @Stapelberg](https://github.com/scottprahl/RigolWFM/issues/11#issue-718562669)
This file uses a 10X probe. First let's have a look at the description of the internal file structure. We see that only channel 1 has been enabled and that it has a 10X probe.
```
# raw=true is needed because this is a binary file
wfm_url = "https://github.com/scottprahl/RigolWFM/raw/master/wfm/DS1102E-E.wfm" + "?raw=true"
w = rigol.Wfm.from_url(wfm_url, 'DS1102E')
description = w.describe()
print(description)
w.plot()
#plt.xlim(-6,6)
plt.show()
```
# PostgreSQL: use sql magic %sql
* pip install ipython-sql
* doc: https://pypi.org/project/ipython-sql/
---
* author: [Prasert Kanawattanachai]([email protected])
* YouTube: https://www.youtube.com/prasertcbs
* [Chulalongkorn Business School](https://www.cbs.chula.ac.th/en/)
---
```
from IPython.display import YouTubeVideo
YouTubeVideo('bgHPGiE0rkg', width=720, height=405)
import pandas as pd
import psycopg2 # postgresql db driver
print(f'pandas version: {pd.__version__}')
print(f'psycopg2 version: {psycopg2.__version__}')
%load_ext sql
```
## PostgresSQL connection
```
import getpass
host='192.168.211.137'
port=5432
dbname='disney'
user=input('user name: ')
pwd=getpass.getpass('password: ')
```
### connection string
* sqlalchemy connection string format
* doc: https://docs.sqlalchemy.org/en/13/core/engines.html
```
connection_string=f'postgresql+psycopg2://{user}:{pwd}@{host}:{port}/{dbname}'
connection_string
%sql postgresql+psycopg2://postgres:[email protected]:5432/disney
%sql select * from movie_gross limit 10;
%sql $connection_string
%sql select * from movie_gross limit 5;
%sql select * from movie_gross limit 5;
%%sql
select * from movie_gross limit 5;
%%sql
select *
from movie_gross
limit 5;
%%sql
select * from pg_catalog.pg_tables
where schemaname != 'pg_catalog' and schemaname != 'information_schema';
%%sql
select table_catalog, table_name, column_name, data_type
from information_schema.columns
where table_name='movie_gross'
```
## psql \ command
```
# pip install pgspecial
%sql \l
%sql \d
%sql \d movie_gross
```
## %sql (single line) vs %%sql (multi-line)
```
%sql select * from movie_gross limit 3;
%%sql
select * from movie_gross limit 3;
%%sql
select *
from movie_gross
where genre = 'Adventure'
limit 5;
%%sql
select * from movie_gross
where extract(year from release_date)=2016
limit 5;
%%sql
drop view if exists vw_adventure;
%%sql
create view vw_adventure as
select * from movie_gross
where genre='Adventure';
%%sql
select * from vw_adventure where total_gross > 100e6;
%%sql
update movie_gross
set movie_title = lower(movie_title);
%%sql
select * from movie_gross limit 5;
```
## save results
```
rs=%sql select * from movie_gross where movie_title ilike '%dog%' limit 5;
rs
type(rs)
df=rs.DataFrame()
df
type(df)
%%sql
select * from movie_gross limit 5;
```
## SqlMagic
```
%config SqlMagic
%config SqlMagic.autopandas = True
%config SqlMagic.displaycon = False
df=%sql select * from movie_gross limit 5;
df
type(df)
```
## multiline %%sql to pandas.DataFrame()
```
%%sql df2 <<
select *
from movie_gross
where genre = 'Adventure'
limit 5;
df2
type(df2)
df2.info()
df2['release_date']=df2['release_date'].astype('datetime64')
df2.info()
df2[df2['release_date'].dt.year == 1993]
```
## switch to other database username@dbname
```
%sql \l
dbname='demo'
connection_string=f'postgresql://{user}:{pwd}@{host}:{port}/{dbname}'
%config SqlMagic.displaycon = True
%sql $connection_string
%sql \d
%%sql
select * from province2 limit 5;
%sql postgres@disney
%sql \d
%sql select * from disney_char limit 5;
%sql postgres@demo
%sql \d
%sql \d province2
%%sql
select * from province2 order by population desc limit 10;
%%sql
select * into south from province2 where region='ใต้'
%%sql
select * from south
%sql \d
%%sql
drop table south;
%sql \d
```
---
<a href="https://colab.research.google.com/github/tuanyuan2008/cs4641/blob/master/randomized-optimization/4_peaks.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
! pip3 install mlrose
import mlrose
import numpy as np
import matplotlib.pyplot as plt
import timeit
```
Four Peaks Problem
=======
```
fitness = mlrose.FourPeaks(t_pct=0.15)
state = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
fitness.evaluate(state)
```
Simulated Annealing
-------
```
# Define decay schedules
schedule = mlrose.GeomDecay()
problem = mlrose.DiscreteOpt(length = 12, fitness_fn = fitness,
maximize = True, max_val = 12)
dic = {}
def tune_schedule(schedule):
fit_scores = []
for i in range(50):
best_state, best_fitness = mlrose.simulated_annealing(problem, schedule = schedule,
max_attempts = 10, max_iters = 1000,
init_state = state)
print('Iteration ', str(i))
print('The best state found is: ', best_state)
print('The fitness at the best state is: ', best_fitness)
print('\n')
fit_scores.append(best_fitness)
print('The average fitness is '+ str(sum(fit_scores) / 50) + ' for ' + str(schedule) + '.')
return sum(fit_scores) / 50
dic[schedule] = tune_schedule(schedule)
schedule = mlrose.ExpDecay()
dic[schedule] = tune_schedule(schedule)
print('\n')
schedule = mlrose.ArithDecay()
dic[schedule] = tune_schedule(schedule)
SA_best_schedule = max(dic, key=lambda key: dic[key])
attempts = range(10, 110, 10)
best_score = 0
best_score_index = -1
fit_scores = [0] * len(attempts)
for i, a in enumerate(attempts):
best_state, best_fitness = mlrose.simulated_annealing(problem, schedule = SA_best_schedule,
max_attempts = a, max_iters = 1000,
init_state = state)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 10
plt.plot(fit_scores)
plt.title("Fitness at Various # of Attempts")
plt.xlabel("# (Attempts - 10) / 10")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' attempts.')
SA_best_attempt = best_score_index
iters = range(100, 5000, 100)
def SA_iters(iters, best_schedule, best_attempt):
best_score = 0
best_score_index = -1
fit_scores = [0] * len(iters)
for i, a in enumerate(iters):
best_state, best_fitness = mlrose.simulated_annealing(problem, schedule = best_schedule,
max_attempts = best_attempt, max_iters = a,
init_state = state)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 100
plt.plot(fit_scores)
plt.title("Fitness at Various # of Iterations")
plt.xlabel("# (Iters - 100) / 100")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' iterations.')
return best_score_index, fit_scores
SA_results = SA_iters(iters, SA_best_schedule, SA_best_attempt)
SA_best_iter = SA_results[0]
SA_fit_scores = SA_results[1]
```
Genetic Algorithms
-----
```
fitness = mlrose.FourPeaks(t_pct=0.15)
state = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
problem = mlrose.DiscreteOpt(length = 12, fitness_fn = fitness,
maximize = True, max_val = 12)
pop = range(50, 1000, 50)
def tune_pop(pop):
best_score = 0
best_score_index = -1
fit_scores = [0] * len(pop)
for i, a in enumerate(pop):
best_state, best_fitness = mlrose.genetic_alg(problem, pop_size=a, mutation_prob=0.1, max_attempts=10, max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 50
plt.plot(fit_scores)
plt.title("Fitness at Various Populations")
plt.xlabel("(Population - 50) / 50")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained with population ' + str(best_score_index) + '.')
return best_score_index
optimal_pop = tune_pop(pop)
def tune_rate(optimal_pop):
rate = 0.01
best_score = 0
best_score_index = -1
fit_scores = [0] * 20
for i in range(20):
best_state, best_fitness = mlrose.genetic_alg(problem, pop_size=optimal_pop, mutation_prob=rate, max_attempts=10, max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = i * 0.05
rate += 0.05
plt.plot(fit_scores)
plt.title("Fitness at Various Rates of Mutation")
plt.xlabel("Rate * 20")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained with mutation rate ' + str(best_score_index) + '.')
return best_score_index
optimal_rate = tune_rate(optimal_pop)
def GA_tune_attempts(optimal_pop, optimal_rate):
attempts = range(10, 110, 10)
best_score = 0
best_score_index = -1
fit_scores = [0] * len(attempts)
for i, a in enumerate(attempts):
best_state, best_fitness = mlrose.genetic_alg(problem, pop_size=optimal_pop, mutation_prob=optimal_rate, max_attempts=a, max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 10
plt.plot(fit_scores)
plt.title("Fitness at Various # of Attempts")
plt.xlabel("# (Attempts - 10) / 10")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' attempts.')
return best_score_index
optimal_attempt = GA_tune_attempts(optimal_pop, optimal_rate)
def GA_iters(optimal_pop, optimal_rate, optimal_attempt):
best_score = 0
best_score_index = -1
iters = range(100, 5000, 100)
fit_scores = [0] * len(iters)
for i, a in enumerate(iters):
best_state, best_fitness = mlrose.genetic_alg(problem, pop_size=optimal_pop, mutation_prob=optimal_rate, max_attempts=optimal_attempt, max_iters=a)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 100
plt.plot(fit_scores)
plt.title("Fitness at Various # of Iterations")
plt.xlabel("# (Iters - 100) / 100")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' iterations.')
return best_score_index, fit_scores
GA_best_iter, GA_fit_scores = GA_iters(optimal_pop, optimal_rate, optimal_attempt)
```
MIMIC
-----
```
fitness = mlrose.FourPeaks(t_pct=0.15)
state = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
problem = mlrose.DiscreteOpt(length = 12, fitness_fn = fitness,
maximize = True, max_val = 12)
pop = range(50, 1000, 50)
def MIMIC_tune_pop(pop):
best_score = 0
best_score_index = -1
fit_scores = [0] * len(pop)
for i, a in enumerate(pop):
best_state, best_fitness = mlrose.mimic(problem, pop_size=a, keep_pct=0.2, max_attempts=10,
max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 50
plt.plot(fit_scores)
plt.title("Fitness at Various Populations")
plt.xlabel("(Population - 50) / 50")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained with population ' + str(best_score_index) + '.')
return best_score_index
MIMIC_optimal_pop = MIMIC_tune_pop(pop)
def MIMIC_tune_rate(optimal_pop):
rate = 0.01
best_score = 0
best_score_index = -1
fit_scores = [0] * 20
for i in range(20):
best_state, best_fitness = mlrose.mimic(problem, pop_size=optimal_pop, keep_pct=rate, max_attempts=10,
max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
            best_score_index = rate  # record the keep_pct actually used at this step (0.01 + 0.05*i)
rate += 0.05
plt.plot(fit_scores)
plt.title("Fitness at Various Proportion of Samples Kept")
plt.xlabel("Rate * 20")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained with ' + str(best_score_index) + ' proportion samples kept.')
return best_score_index
MIMIC_optimal_rate = MIMIC_tune_rate(MIMIC_optimal_pop)
def MIMIC_tune_attempts(optimal_pop, optimal_rate):
attempts = range(10, 110, 10)
best_score = 0
best_score_index = -1
fit_scores = [0] * len(attempts)
for i, a in enumerate(attempts):
best_state, best_fitness = mlrose.mimic(problem, pop_size=optimal_pop, keep_pct=optimal_rate, max_attempts=a,
max_iters=np.inf)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 10
plt.plot(fit_scores)
plt.title("Fitness at Various # of Attempts")
plt.xlabel("# (Attempts - 10) / 10")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' attempts.')
return best_score_index
MIMIC_optimal_attempt = MIMIC_tune_attempts(MIMIC_optimal_pop, MIMIC_optimal_rate)
def MIMIC_iters(optimal_pop, optimal_rate, optimal_attempt):
best_score = 0
best_score_index = -1
iters = range(100, 5000, 100)
fit_scores = [0] * len(iters)
for i, a in enumerate(iters):
best_state, best_fitness = mlrose.mimic(problem, pop_size=optimal_pop, keep_pct=optimal_rate, max_attempts=optimal_attempt,
max_iters=a)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 100
plt.plot(fit_scores)
plt.title("Fitness at Various # of Iterations")
plt.xlabel("# (Iters - 100) / 100")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' iterations.')
return best_score_index, fit_scores
MIMIC_best_iter, MIMIC_fit_scores = MIMIC_iters(MIMIC_optimal_pop, MIMIC_optimal_rate, MIMIC_optimal_attempt)
```
Randomized Hill Climbing
-----
```
fitness = mlrose.FourPeaks(t_pct=0.15)
state = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0])
problem = mlrose.DiscreteOpt(length = 12, fitness_fn = fitness,
maximize = True, max_val = 12)
restarts = range(0, 500, 5)
def tune_restarts(restarts):
best_score = 0
best_score_index = -1
fit_scores = [0] * len(restarts)
for i, a in enumerate(restarts):
best_state, best_fitness = mlrose.random_hill_climb(problem, max_attempts=10, max_iters=np.inf, restarts=a,
init_state=state)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = i * 5
plt.plot(fit_scores)
plt.title("Fitness at Various # Restarts")
plt.xlabel("# Restarts / 5")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained with ' + str(best_score_index) + ' restarts.')
return best_score_index
optimal_restarts = tune_restarts(restarts)
def RHC_tune_attempts(optimal_restarts):
attempts = range(10, 110, 10)
best_score = 0
best_score_index = -1
fit_scores = [0] * len(attempts)
for i, a in enumerate(attempts):
best_state, best_fitness = mlrose.random_hill_climb(problem, max_attempts=a, max_iters=np.inf, restarts=optimal_restarts,
init_state=state)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 10
plt.plot(fit_scores)
plt.title("Fitness at Various # of Attempts")
plt.xlabel("# (Attempts - 10) / 10")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' attempts.')
return best_score_index
RHC_optimal_attempt = RHC_tune_attempts(optimal_restarts)
def RHC_iters(optimal_restarts, optimal_attempt):
best_score = 0
best_score_index = -1
iters = range(100, 5000, 100)
fit_scores = [0] * len(iters)
for i, a in enumerate(iters):
best_state, best_fitness = mlrose.random_hill_climb(problem, max_attempts=optimal_attempt, max_iters=a, restarts=optimal_restarts,
init_state=state)
# print('Iteration ', str(i))
# print('The best state found is: ', best_state)
# print('The fitness at the best state is: ', best_fitness)
# print('\n')
fit_scores[i] = best_fitness
if best_fitness > best_score:
best_score = best_fitness
best_score_index = (i + 1) * 100
plt.plot(fit_scores)
plt.title("Fitness at Various # of Iterations")
plt.xlabel("# (Iters - 100) / 100")
plt.ylabel("Fitness Score")
plt.show()
print('The maximum fitness score of ' + str(best_score) + ' is obtained in ' + str(best_score_index) + ' iterations.')
return best_score_index, fit_scores
RHC_best_iter, RHC_fit_scores = RHC_iters(optimal_restarts, RHC_optimal_attempt)
```
# Introduction to the PyTorch Library -- Solutions
Course material written by Pascal Germain, 2019
************
### Part 1
```
# Imports required by the solution classes below
import numpy as np
import torch
from torch import nn
class regression_logistique:
def __init__(self, rho=.01, eta=0.4, nb_iter=50, seed=None):
        # Initialize the gradient descent parameters
        self.rho = rho          # Regularization parameter
        self.eta = eta          # Gradient step size
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the trace of the gradient descent
self.w_list = list()
self.obj_list = list()
def _trace(self, w, obj):
self.w_list.append(np.array(w.detach()))
self.obj_list.append(obj.item())
def apprentissage(self, x, y):
if self.seed is not None:
torch.manual_seed(self.seed)
x = torch.tensor(x, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32)
n, d = x.shape
self.w = torch.randn(d, requires_grad=True)
for t in range(self.nb_iter + 1):
xw = x @ self.w
w2 = self.w @ self.w
loss = torch.mean(- y * xw + torch.log(1 + torch.exp(xw))) + self.rho * w2 / 2
self._trace(self.w, loss)
if t < self.nb_iter:
loss.backward()
with torch.no_grad():
self.w -= self.eta * self.w.grad
self.w.grad.zero_()
def prediction(self, x):
x = torch.tensor(x, dtype=torch.float32)
with torch.no_grad():
pred = x @ self.w
return np.array(pred.numpy() > 0, dtype=np.int)
class regression_logistique_avec_biais:
def __init__(self, rho=.01, eta=0.4, nb_iter=50, seed=None):
        # Initialize the gradient descent parameters
        self.rho = rho          # Regularization parameter
        self.eta = eta          # Gradient step size
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the trace of the gradient descent
self.w_list = list()
self.b_list = list()
self.obj_list = list()
def _trace(self, w, b, obj):
self.w_list.append(np.array(w.detach()))
self.b_list.append(b.item())
self.obj_list.append(obj.item())
def apprentissage(self, x, y):
if self.seed is not None:
torch.manual_seed(self.seed)
x = torch.tensor(x, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32)
n, d = x.shape
self.w = torch.randn(d, requires_grad=True)
self.b = torch.zeros(1, requires_grad=True)
for t in range(self.nb_iter + 1):
xw = x @ self.w + self.b
w2 = self.w @ self.w
loss = torch.mean(- y * xw + torch.log(1 + torch.exp(xw))) + self.rho * w2 / 2
self._trace(self.w, self.b, loss)
if t < self.nb_iter:
loss.backward()
with torch.no_grad():
self.w -= self.eta * self.w.grad
self.b -= self.eta * self.b.grad
self.w.grad.zero_()
self.b.grad.zero_()
def prediction(self, x):
x = torch.tensor(x, dtype=torch.float32)
with torch.no_grad():
pred = x @ self.w + self.b
return np.array(pred.numpy() > 0, dtype=np.int)
```
### Part 2
```
class reseau_classification:
def __init__(self, nb_neurones=4, eta=0.4, alpha=0.1, nb_iter=50, seed=None):
        # Network architecture
        self.nb_neurones = nb_neurones  # Number of neurons in the hidden layer
        # Initialize the gradient descent parameters
        self.eta = eta          # Gradient step size
        self.alpha = alpha      # Momentum
        self.nb_iter = nb_iter  # Number of iterations
        self.seed = seed        # Random number generator seed
        # Initialize the lists recording the trace of the algorithm
self.w_list = list()
self.obj_list = list()
def _trace(self, obj):
self.obj_list.append(obj.item())
def apprentissage(self, x, y):
if self.seed is not None:
torch.manual_seed(self.seed)
x = torch.tensor(x, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(1)
n, d = x.shape
self.model = nn.Sequential(
torch.nn.Linear(d, self.nb_neurones),
torch.nn.ReLU(),
torch.nn.Linear(self.nb_neurones, 1),
torch.nn.Sigmoid()
)
perte_logistique = nn.BCELoss()
optimiseur = torch.optim.SGD(self.model.parameters(), lr=self.eta, momentum=self.alpha)
for t in range(self.nb_iter + 1):
y_pred = self.model(x)
perte = perte_logistique(y_pred, y)
self._trace(perte)
if t < self.nb_iter:
perte.backward()
optimiseur.step()
optimiseur.zero_grad()
def prediction(self, x):
x = torch.tensor(x, dtype=torch.float32)
with torch.no_grad():
pred = self.model(x)
pred = pred.squeeze()
return np.array(pred > .5, dtype=np.int)
```
# Classifying Fashion-MNIST
Now it's your turn to build and train a neural network. You'll be using the [Fashion-MNIST dataset](https://github.com/zalandoresearch/fashion-mnist), a drop-in replacement for the MNIST dataset. MNIST is actually quite trivial with neural networks where you can easily achieve better than 97% accuracy. Fashion-MNIST is a set of 28x28 greyscale images of clothes. It's more complex than MNIST, so it's a better representation of the actual performance of your network, and a better representation of datasets you'll use in the real world.
<img src='assets/fashion-mnist-sprite.png' width=500px>
In this notebook, you'll build your own neural network. For the most part, you could just copy and paste the code from Part 3, but you wouldn't be learning. It's important for you to write the code yourself and get it to work. Feel free to consult the previous notebooks though as you work through this.
First off, let's load the dataset through torchvision.
```
import torch
from torchvision import datasets, transforms
import helper
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here we can see one of the images.
```
image, label = next(iter(trainloader))
helper.imshow(image[0,:]);
```
## Building the network
Here you should define your network. As with MNIST, each image is 28x28 which is a total of 784 pixels, and there are 10 classes. You should include at least one hidden layer. We suggest you use ReLU activations for the layers and to return the logits or log-softmax from the forward pass. It's up to you how many layers you add and the size of those layers.
```
# TODO: Define your network architecture here
import torch.nn as nn
model = nn.Sequential(nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1))
```
# Train the network
Now you should create your network and train it. First you'll want to define [the criterion](http://pytorch.org/docs/master/nn.html#loss-functions) (something like `nn.CrossEntropyLoss`) and [the optimizer](http://pytorch.org/docs/master/optim.html) (typically `optim.SGD` or `optim.Adam`).
Then write the training code. Remember the training pass is a fairly straightforward process:
* Make a forward pass through the network to get the logits
* Use the logits to calculate the loss
* Perform a backward pass through the network with `loss.backward()` to calculate the gradients
* Take a step with the optimizer to update the weights
By adjusting the hyperparameters (hidden units, learning rate, etc), you should be able to get the training loss below 0.4.
```
# TODO: Create the network, define the criterion and optimizer
criterion = nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.003)
# TODO: Train the network here
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten MNIST images into a 784 long vector
images = images.view(images.shape[0], -1)
# TODO: Training pass
# Clear the gradients, do this because gradients are accumulated
optimizer.zero_grad()
output = model.forward(images)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
# Test out your network!
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[0]
# Convert 2D image to 1D vector
img = img.resize_(1, 784)
# TODO: Calculate the class probabilities (softmax) for img
with torch.no_grad():
logps = model(img)
# Output of the network are log-probabilities, need to take exponential for probabilities
ps = torch.exp(logps)
# Plot the image and probabilities
helper.view_classify(img.resize_(1, 28, 28), ps, version='Fashion')
```
```
import wandb
wandb.init(project="Channel_Charting")
import torch
from torch import nn
from torch.optim import SGD
from torch.utils.data import DataLoader
import torch.nn.functional as F
from torchvision.transforms import Compose, ToTensor, Normalize
from torchvision.datasets import MNIST
from ignite.engine import Events, create_supervised_trainer, create_supervised_evaluator
from ignite.metrics import Accuracy, Loss
from tqdm import tqdm
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=-1)
def get_data_loaders(train_batch_size, val_batch_size):
data_transform = Compose([ToTensor(), Normalize((0.1307,), (0.3081,))])
train_loader = DataLoader(MNIST(download=True, root=".", transform=data_transform, train=True),
batch_size=train_batch_size, shuffle=True)
val_loader = DataLoader(MNIST(download=False, root=".", transform=data_transform, train=False),
batch_size=val_batch_size, shuffle=False)
return train_loader, val_loader
def run(train_batch_size, val_batch_size, epochs, lr, momentum, log_interval):
train_loader, val_loader = get_data_loaders(train_batch_size, val_batch_size)
model = Net()
device = 'cpu'
if torch.cuda.is_available():
device = 'cuda'
optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)
trainer = create_supervised_trainer(model, optimizer, F.nll_loss, device=device)
evaluator = create_supervised_evaluator(model,
metrics={'accuracy': Accuracy(),
'nll': Loss(F.nll_loss)},
device=device)
desc = "ITERATION - loss: {:.2f}"
pbar = tqdm(
initial=0, leave=False, total=len(train_loader),
desc=desc.format(0)
)
@trainer.on(Events.ITERATION_COMPLETED(every=log_interval))
def log_training_loss(engine):
pbar.desc = desc.format(engine.state.output)
pbar.update(log_interval)
wandb.log({"train loss": engine.state.output})
@trainer.on(Events.EPOCH_COMPLETED)
def log_training_results(engine):
pbar.refresh()
evaluator.run(train_loader)
metrics = evaluator.state.metrics
avg_accuracy = metrics['accuracy']
avg_nll = metrics['nll']
tqdm.write(
"Training Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f}"
.format(engine.state.epoch, avg_accuracy, avg_nll)
)
@trainer.on(Events.EPOCH_COMPLETED)
def log_validation_results(engine):
evaluator.run(val_loader)
metrics = evaluator.state.metrics
avg_accuracy = metrics['accuracy']
avg_nll = metrics['nll']
tqdm.write(
"Validation Results - Epoch: {} Avg accuracy: {:.2f} Avg loss: {:.2f}"
.format(engine.state.epoch, avg_accuracy, avg_nll))
pbar.n = pbar.last_print_n = 0
wandb.log({"validation loss": engine.state.metrics['nll']})
wandb.log({"validation accuracy": engine.state.metrics['accuracy']})
trainer.run(train_loader, max_epochs=epochs)
pbar.close()
# Train Model
hyperparameter_defaults = dict(
batch_size = 256,
val_batch_size = 100,
epochs = 10,
lr = 0.001,
momentum = 0.3,
log_interval = 10,
)
```
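The visible cells stop after defining `hyperparameter_defaults` and never actually call `run`. Purely as an illustration (this invocation is not part of the original cells), the defaults could be passed to `run` as follows, with the arguments matching the signature defined above:
```
# Hypothetical invocation (not in the original cells): launch training with the defaults above
params = hyperparameter_defaults
run(params['batch_size'], params['val_batch_size'], params['epochs'],
    params['lr'], params['momentum'], params['log_interval'])
```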
# Spatial joins
Goals of this notebook:
- Based on the `countries` and `cities` dataframes, determine for each city the country in which it is located.
- To solve this problem, we will use the concept of a 'spatial join' operation: combining information of geospatial datasets based on their spatial relationship.
```
%matplotlib inline
import pandas as pd
import geopandas
pd.options.display.max_rows = 10
countries = geopandas.read_file("zip://./data/ne_110m_admin_0_countries.zip")
cities = geopandas.read_file("zip://./data/ne_110m_populated_places.zip")
rivers = geopandas.read_file("zip://./data/ne_50m_rivers_lake_centerlines.zip")
```
## Recap - joining dataframes
Pandas provides functionality to join or merge dataframes in different ways; see https://chrisalbon.com/python/data_wrangling/pandas_join_merge_dataframe/ for an overview and https://pandas.pydata.org/pandas-docs/stable/merging.html for the full documentation.
To illustrate the concept of joining the information of two dataframes with pandas, let's take a small subset of our `cities` and `countries` datasets:
```
cities2 = cities[cities['name'].isin(['Bern', 'Brussels', 'London', 'Paris'])].copy()
cities2['iso_a3'] = ['CHE', 'BEL', 'GBR', 'FRA']
cities2
countries2 = countries[['iso_a3', 'name', 'continent']]
countries2.head()
```
We added a 'iso_a3' column to the `cities` dataset, indicating a code of the country of the city. This country code is also present in the `countries` dataset, which allows us to merge those two dataframes based on the common column.
Joining the `cities` dataframe with `countries` will transfer extra information about the countries (the full name, the continent) to the `cities` dataframe, based on a common key:
```
cities2.merge(countries2, on='iso_a3')
```
**But**, for this illustrative example, we added the common column manually; it is not present in the original dataset. However, we can still join those two datasets based on their spatial coordinates.
## Recap - spatial relationships between objects
In the previous notebook [02-spatial-relationships.ipynb](./02-spatial-relationships-operations.ipynb), we have seen the notion of spatial relationships between geometry objects: within, contains, intersects, ...
In this case, we know that each of the cities is located *within* one of the countries, or the other way around that each country can *contain* multiple cities.
We can test such relationships using the methods we have seen in the previous notebook:
```
france = countries.loc[countries['name'] == 'France', 'geometry'].squeeze()
cities.within(france)
```
The above gives us a boolean series, indicating for each point in our `cities` dataframe whether it is located within the area of France or not.
Because this is a boolean series as result, we can use it to filter the original dataframe to only show those cities that are actually within France:
```
cities[cities.within(france)]
```
We could now repeat the above analysis for each of the countries, and add a column to the `cities` dataframe indicating this country. However, that would be tedious to do manually, and is also exactly what the spatial join operation provides us.
*(note: the above result is incorrect, but this is just because of the coarse-ness of the countries dataset)*
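To make the contrast concrete, the tedious manual approach hinted at above could look something like the sketch below. This is only an illustration and not part of the original notebook; the names `city_country` and `cities_manual` are invented here.
```
# Illustrative sketch only: loop over every country and record its name
# for the cities that fall within it; this is exactly what a spatial join automates.
city_country = pd.Series(index=cities.index, dtype="object")
for idx, country in countries.iterrows():
    mask = cities.within(country["geometry"])
    city_country[mask] = country["name"]
cities_manual = cities.copy()
cities_manual["country"] = city_country
cities_manual.head()
```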
## Spatial join operation
<div class="alert alert-info" style="font-size:120%">
**SPATIAL JOIN** = *transferring attributes from one layer to another based on their spatial relationship* <br><br>
Different parts of this operation:
* The GeoDataFrame to which we want add information
* The GeoDataFrame that contains the information we want to add
* The spatial relationship we want to use to match both datasets ('intersects', 'contains', 'within')
* The type of join: left or inner join

</div>
In this case, we want to join the `cities` dataframe with the information of the `countries` dataframe, based on the spatial relationship between both datasets.
We use the [`geopandas.sjoin`](http://geopandas.readthedocs.io/en/latest/reference/geopandas.sjoin.html) function:
```
joined = geopandas.sjoin(cities, countries, op='within', how='left')
joined
joined['continent'].value_counts()
```
## Let's practice!
We will again use the Paris datasets to do some exercises. Let's start importing them again:
```
districts = geopandas.read_file("data/paris_districts_utm.geojson")
stations = geopandas.read_file("data/paris_sharing_bike_stations_utm.geojson")
```
<div class="alert alert-success">
<b>EXERCISE: Make a plot of the density of bike stations by district</b>
<p>
<ul>
<li>Determine for each bike station in which district it is located (using a spatial join!). Call the result `joined`.</li>
<li>Based on this result, calculate the number of bike stations in each district (e.g. using the `groupby` method; you can use the `size` method to know the size of each group).
<ul>
<li>Make sure the result is a DataFrame called `counts` with the columns 'district_name' and 'n_bike_stations'.</li>
<li>To go from a Series to a DataFrame, you can use the `reset_index` or `to_frame` method (both have a `name` keyword to specify a column name for the original Series values).
</ul>
</li>
<li>Add those counts to the original `districts` dataframe, creating a new `districts2` dataframe (tip: this is a merge operation).</li>
<li>Calculate a new column 'n_bike_stations_by_area'.</li>
<li>Make a plot showing the density in bike stations of the districts.</li>
</ul>
</p>
</div>
```
# %load _solved/solutions/03-spatial-joins1.py
# %load _solved/solutions/03-spatial-joins2.py
# %load _solved/solutions/03-spatial-joins3.py
# %load _solved/solutions/03-spatial-joins4.py
# %load _solved/solutions/03-spatial-joins5.py
# %load _solved/solutions/03-spatial-joins6.py
# %load _solved/solutions/03-spatial-joins7.py
```
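The solution files loaded above are not included here. A possible approach is sketched below as a guide only; it uses the names `joined`, `counts` and `districts2` that the exercise asks for, and assumes the districts file has a `district_name` column as described.
```
# Possible solution sketch (not the official solution files)
# 1. Spatial join: find for each bike station the district it lies in
joined = geopandas.sjoin(stations, districts, op='within', how='left')
# 2. Count the stations per district and convert the resulting Series to a DataFrame
counts = joined.groupby('district_name').size().to_frame(name='n_bike_stations').reset_index()
# 3. Merge the counts back into the districts dataframe
districts2 = districts.merge(counts, on='district_name')
# 4. Density of bike stations: number of stations divided by the district area
districts2['n_bike_stations_by_area'] = districts2['n_bike_stations'] / districts2.geometry.area
# 5. Plot the density
districts2.plot(column='n_bike_stations_by_area', legend=True)
```
Because the data are in UTM coordinates (metres), the resulting density is expressed in stations per square metre.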
## The overlay operation
In the spatial join operation above, we are not changing the geometries themselves. We are not joining geometries, but joining attributes based on a spatial relationship between the geometries. This also means that the geometries need to at least overlap partially.
If you want to create new geometries based on joining (combining) geometries of different dataframes into one new dataframe (e.g. by taking the intersection of the geometries), you want an **overlay** operation.
```
africa = countries[countries['continent'] == 'Africa']
africa.plot()
cities['geometry'] = cities.buffer(2)
geopandas.overlay(africa, cities, how='difference').plot()
```
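For comparison (this cell is not in the original notebook), the same overlay with `how='intersection'` keeps only the parts where the two layers overlap, i.e. the buffered city areas that fall inside the African countries:
```
# Illustration: keep only the overlapping parts of both layers
geopandas.overlay(africa, cities, how='intersection').plot()
```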
<div class="alert alert-info" style="font-size:120%">
<b>REMEMBER</b> <br>
* **Spatial join**: transfer attributes from one dataframe to another based on the spatial relationship
* **Spatial overlay**: construct new geometries based on spatial operation between both dataframes (and combining attributes of both dataframes)
</div>
# Unit 5 - Financial Planning
```
# Initial imports
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
import datetime
import json
%matplotlib inline
# Load .env enviroment variables
load_dotenv()
```
## Part 1 - Personal Finance Planner
### Collect Crypto Prices Using the `requests` Library
```
# Set current amount of crypto assets
my_btc = 1.2
my_eth = 5.3
# Crypto API URLs
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=CAD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=CAD"
# Fetch current BTC price
btc_fetch = requests.get(btc_url)
btc_json = btc_fetch.json()
# Fetch current ETH price
eth_fetch = requests.get(eth_url)
eth_json = eth_fetch.json()
# Compute current value of my crpto
my_btc_value = my_btc * btc_json['data']['1']['quotes']['CAD']['price']
my_eth_value = my_eth * eth_json['data']['1027']['quotes']['CAD']['price']
# Print current crypto wallet balance
print(f"The current value of your {my_btc} BTC is ${my_btc_value}")
print(f"The current value of your {my_eth} ETH is ${my_eth_value}")
```
### Collect Investments Data Using Alpaca: `SPY` (stocks) and `AGG` (bonds)
```
# Set current amount of shares
my_agg = 200
my_spy = 50
# Set Alpaca API key and secret
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca API object
api = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version = "v2"
)
# Format current date as ISO format
start_date = pd.Timestamp("2020-05-01", tz="America/New_York").isoformat()
end_date = pd.Timestamp("2021-04-30", tz="America/New_York").isoformat()
# Set the tickers
tickers = ["AGG", "SPY"]
# Set timeframe to '1D' for Alpaca API
timeframe = "1D"
# Get current closing prices for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
df_ticker = api.get_barset(tickers, timeframe, limit = 1000, start = start_date, end = end_date).df
df_ticker.index = df_ticker.index.date
# Preview DataFrame
df_ticker.head()
# Pick AGG and SPY close prices
# Taking the latest price based on the data captured from Alpaca
agg_close_price = df_ticker['AGG']['close'][-1]
spy_close_price = df_ticker['SPY']['close'][-1]
# Print AGG and SPY close prices
print(f"Current AGG closing price: ${agg_close_price}")
print(f"Current SPY closing price: ${spy_close_price}")
# Compute the current value of shares
my_spy_value = my_spy * spy_close_price
my_agg_value = my_agg * agg_close_price
# Print current value of shares
print(f"The current value of your {my_spy} SPY shares is ${my_spy_value:0.2f}")
print(f"The current value of your {my_agg} AGG shares is ${my_agg_value:0.2f}")
```
### Savings Health Analysis
```
# Set monthly household income
monthly_income = 12000
# Consolidate financial assets data
shares = my_spy_value + my_agg_value
crypto = my_btc_value + my_eth_value
# Create savings DataFrame
df_savings = pd.DataFrame({'category':['crypto', 'shares'], 'amount':[crypto, shares]})
df_savings = df_savings.set_index('category')
# Display savings DataFrame
display(df_savings)
df_savings['amount'].sum()
# Plot savings pie chart
df_savings.plot(kind = 'pie', y = 'amount')
# Set ideal emergency fund
emergency_fund = monthly_income * 3
# Calculate total amount of savings
savings = df_savings['amount'].sum()
# Validate saving health
if savings > emergency_fund:
print('Congratulations! You have enough money in your emergency fund.')
elif savings == emergency_fund:
print('Congratulations! You have reached your financial goal')
else:
print(f'You are still ${emergency_fund - savings} away from reaching the goal')
```
## Part 2 - Retirement Planning
### Monte Carlo Simulation
```
# Set start and end dates of five years back from today.
# Sample results may vary from the solution based on the time frame chosen
start_date = pd.Timestamp('2016-05-01', tz='America/New_York').isoformat()
end_date = pd.Timestamp('2021-05-01', tz='America/New_York').isoformat()
# Get 5 years' worth of historical data for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
df_stock_data = api.get_barset(tickers, timeframe, limit = 1000, start = start_date, end = end_date).df
df_stock_data.index = df_stock_data.index.date
# Display sample data
df_stock_data.head()
# Configuring a Monte Carlo simulation to forecast 30 years cumulative returns
simulation = MCSimulation(
df_stock_data,
weights = [0.4,0.6],
num_simulation = 500,
num_trading_days = 252*30)
# Printing the simulation input data
simulation.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
simulation.calc_cumulative_return()
# Plot simulation outcomes
line_plot = simulation.plot_simulation()
# Plot probability distribution and confidence intervals
dist_plot = simulation.plot_distribution()
```
### Retirement Analysis
```
# Fetch summary statistics from the Monte Carlo simulation results
saving_summary = simulation.summarize_cumulative_return()
# Print summary statistics
print(saving_summary)
```
### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `$20,000` initial investment.
```
# Set initial investment
initial_investment = 20000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000
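# (indices 8 and 9 of the summary correspond to the lower and upper bounds of the 95% confidence interval)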
ci_lower = round(saving_summary[8]*initial_investment,2)
ci_upper = round(saving_summary[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
f" over the next 30 years will end within in the range of"
f" ${ci_lower} and ${ci_upper}")
```
### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `50%` increase in the initial investment.
```
# Set initial investment
initial_investment_1 = 20000 * 1.5
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
ci_lower = round(saving_summary[8]*initial_investment_1,2)
ci_upper = round(saving_summary[9]*initial_investment_1,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment_1} in the portfolio"
f" over the next 30 years will end within in the range of"
f" ${ci_lower} and ${ci_upper}")
```
## Optional Challenge - Early Retirement
### Five Years Retirement Option
```
# Configuring a Monte Carlo simulation to forecast 5 years cumulative returns
simulation_1 = MCSimulation(
df_stock_data,
weights = [0.6,0.4],
num_simulation = 500,
num_trading_days = 252*5)
# Printing the simulation input data
simulation_1.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
simulation_1.calc_cumulative_return()
# Plot simulation outcomes
line_plot = simulation_1.plot_simulation()
# Plot probability distribution and confidence intervals
dist_plot = simulation_1.plot_distribution()
# Fetch summary statistics from the Monte Carlo simulation results
saving_summary_5_years = simulation_1.summarize_cumulative_return()
# Print summary statistics
print(saving_summary_5_years)
# Set initial investment
initial_investment_2 = 60000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
ci_lower_five = round(saving_summary_5_years[8]*initial_investment_2,2)
ci_upper_five = round(saving_summary_5_years[9]*initial_investment_2,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment_2} in the portfolio"
f" over the next 5 years will end within in the range of"
f" ${ci_lower_five} and ${ci_upper_five}")
```
### Ten Years Retirement Option
```
# Configuring a Monte Carlo simulation to forecast 10 years cumulative returns
simulation_2 = MCSimulation(
df_stock_data,
weights = [0.6,0.4],
num_simulation = 500,
num_trading_days = 252*10)
# Printing the simulation input data
simulation_2.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 10 years cumulative returns
simulation_2.calc_cumulative_return()
# Plot simulation outcomes
line_plot = simulation_2.plot_simulation()
# Plot probability distribution and confidence intervals
dist_plot = simulation_2.plot_distribution()
# Fetch summary statistics from the Monte Carlo simulation results
saving_summary_10_years = simulation_2.summarize_cumulative_return()
# Print summary statistics
print(saving_summary_10_years)
# Set initial investment
initial_investment_3 = 60000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $60,000
ci_lower_ten = round(saving_summary_10_years[8]*initial_investment_3,2)
ci_upper_ten = round(saving_summary_10_years[9]*initial_investment_3,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment_3} in the portfolio"
f" over the next 10 years will end within in the range of"
f" ${ci_lower_ten} and ${ci_upper_ten}")
```
<a href="https://colab.research.google.com/github/DiploDatos/AprendizajePorRefuerzos/blob/master/lab_1_intro_rl.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Notebook 1: Introduction to Reinforcement Learning
Reinforcement Learning course, Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones
FaMAF, 2021
## Introduction
The following notebook shows how to run reinforcement learning agents, which you will need in order to complete this lab.
### Quick review
* Reward: signal $r$ received from the environment that rewards or punishes the agent according to how well it is doing with respect to the goal of the task.
* Value: function $v_\pi (s)$ stating how much reward the agent expects to collect by following policy $\pi$ starting from state $s$. It is also often written as $Q_\pi(s,a)$, the return the agent expects when starting from state $s$, taking action $a$, and following policy $\pi$ afterwards.
* Policy: function $\pi(s) \to a$ that maps a state to an action. It is usually expressed as the probability of choosing an action, $\pi(a \mid s)$. The $\epsilon$-greedy policy, where $\epsilon$ is the exploration probability (normally smaller than the exploitation probability), is given by
$$\pi(a \mid s) = 1 - \epsilon$$ if $a$ is the best action, and otherwise $$\pi(a \mid s) = \epsilon$$
In the Softmax policy, on the other hand, the highest-valued action is not chosen outright; instead the probability of each action is computed with the Softmax function and an action is drawn at random, weighted by those probabilities. Thus, for each action $a$, $$\pi(a \mid s) = \frac{e^{Q(s,a)/\tau}}{\sum_{\widetilde{a} \in A}e^{Q(s,\widetilde{a})/\tau}}$$
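To make these two exploration policies concrete, here is a minimal, self-contained sketch of $\epsilon$-greedy and Softmax action selection over a toy set of action values. It is only illustrative: the names `q_values`, `epsilon` and `tau` mirror variables used later in this notebook, but these helper functions are not part of the original code.
```
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    # with probability epsilon explore (random action), otherwise exploit the best-known action
    if rng.uniform() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_action(q_values, tau, rng):
    # the temperature tau controls exploration: high tau -> near-uniform, low tau -> near-greedy
    prefs = np.array(q_values, dtype=float) / tau
    prefs -= prefs.max()  # subtract the maximum for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(rng.choice(len(q_values), p=probs))

rng = np.random.default_rng(42)
toy_q_values = [0.0, 1.5, 0.3]  # made-up action values for a single state
print(epsilon_greedy(toy_q_values, 0.1, rng), softmax_action(toy_q_values, 25, rng))
```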
In this notebook we look at two algorithms for updating the value function (and, therefore, the action-selection policy):
* SARSA update (on-policy).
$$Q(s,a) \gets Q(s,a) + \alpha (r + \gamma Q(s',a') - Q(s,a))$$
Full algorithm (for reference):

* Q-Learning update (off-policy)
$$Q(s,a) \gets Q(s,a) + \alpha (r + \gamma \max_{a'} Q(s',a') - Q(s,a))$$
Full algorithm (for reference):

Image source: chapter 6 of [Reinforcement Learning: An Introduction](http://www.incompleteideas.net/book/the-book.html).
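For reference only, the two update rules above translate into code as follows. This is a hedged sketch over a dictionary-based Q table with `(state, action)` keys (the same layout used later in this notebook); it is not meant to replace the `learn` function you are asked to implement in the activities.
```
def sarsa_update(q, s, a, r, s_next, a_next, alpha, gamma):
    # on-policy target: uses the action actually selected in the next state
    td_target = r + gamma * q.get((s_next, a_next), 0.0)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (td_target - q.get((s, a), 0.0))

def q_learning_update(q, s, a, r, s_next, actions, alpha, gamma):
    # off-policy target: uses the greedy (maximum) value over next-state actions
    td_target = r + gamma * max(q.get((s_next, b), 0.0) for b in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (td_target - q.get((s, a), 0.0))
```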
## Library to be used: OpenAI Gym
[OpenAI Gym](https://gym.openai.com/) (Brockman et al., 2016) is an OpenAI library that offers environments and a standard interface for testing our agents. Its goal is to provide unified benchmarks of an algorithm's performance in an environment, so that it can easily be compared against others. Part of the following section is based on the [official OpenAI documentation](https://gym.openai.com/docs/).
The main interface of gym environments is the Env interface. It has five main methods:
* ```reset(self)``` : Resets the environment to its initial state, returning an observation of that state.
* ```step(self, action)``` : Advances the environment by one timestep. Returns: ```observation, reward, done, info```.
* ```render(self)``` : Renders part of the environment on screen.
* ```close(self)``` : Shuts down the environment instance.
* ```seed(self)``` : Sets the random seed of the environment's random number generator.
In addition, each environment has the following three main attributes (a short usage sketch follows this list):
* ```action_space``` : The Space object describing the space of valid actions.
* ```observation_space``` : The Space object describing all possible ranges of observations.
* ```reward_range``` : Tuple containing the minimum and maximum possible reward values.
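As a minimal sketch of how these methods and attributes fit together (assuming the classic, pre-0.26 gym API used throughout this notebook, and using `CartPole-v0` purely as an example environment id):
```
import gym

env = gym.make('CartPole-v0')   # any registered environment id works here
env.seed(0)                     # fix the environment's random number generator
print(env.action_space)         # e.g. Discrete(2)
print(env.observation_space)    # e.g. Box(4,)
print(env.reward_range)         # e.g. (-inf, inf)

observation = env.reset()       # initial observation of the freshly reset environment
observation, reward, done, info = env.step(env.action_space.sample())
env.close()
```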
Some of the runs below include videos. To view them you first need to install the ffmpeg library; to install it on Linux run in a terminal
```apt-get install ffmpeg```
on Mac, replace *apt-get* with *brew*
on Windows, download it from
[https://ffmpeg.org/download.html](https://ffmpeg.org/download.html)
(Note: the animations are for illustration only; if you prefer not to install the library, you can simply remove the line of code that calls ``env.render(mode='human')``)
Basic import code and plotting functions (do not modify)
```
#@title Basic plotting code (do not modify)
import numpy as np
import matplotlib.pyplot as plt
import itertools
import gym
def plot_reward_per_episode(reward_ep):
episode_rewards = np.array(reward_ep)
    # smooth the convergence curve
episode_number = np.linspace(1, len(episode_rewards) + 1, len(episode_rewards) + 1)
acumulated_rewards = np.cumsum(episode_rewards)
reward_per_episode = [acumulated_rewards[i] / episode_number[i] for i in range(len(acumulated_rewards))]
plt.plot(reward_per_episode)
    plt.title('Cumulative reward per episode')
plt.show()
def plot_steps_per_episode(timesteps_ep):
    # plot the learning curve of steps per episode
episode_steps = np.array(timesteps_ep)
plt.plot(np.array(range(0, len(episode_steps))), episode_steps)
    plt.title('Steps (timesteps) per episode')
plt.show()
def plot_steps_per_episode_smooth(timesteps_ep):
episode_steps = np.array(timesteps_ep)
    # smooth the learning curve
episode_number = np.linspace(1, len(episode_steps) + 1, len(episode_steps) + 1)
acumulated_steps = np.cumsum(episode_steps)
steps_per_episode = [acumulated_steps[i] / episode_number[i] for i in range(len(acumulated_steps))]
plt.plot(steps_per_episode)
    plt.title('Cumulative steps (timesteps) per episode')
plt.show()
def draw_value_matrix(q):
n_rows = 4
n_columns = 12
n_actions = 4
    # precompute the values needed to plot the value matrix
q_value_matrix = np.empty((n_rows, n_columns))
for row in range(n_rows):
for column in range(n_columns):
state_values = []
for action in range(n_actions):
state_values.append(q.get((row * n_columns + column, action), -100))
            maximum_value = max(state_values)  # pick the action that yields the maximum value
q_value_matrix[row, column] = maximum_value
    # the goal state's value is set to -1 (the reward received on arrival) so that it is colored appropriately
q_value_matrix[3, 11] = -1
    # plot the value matrix
plt.imshow(q_value_matrix, cmap=plt.cm.RdYlGn)
plt.tight_layout()
plt.colorbar()
for row, column in itertools.product(range(q_value_matrix.shape[0]), range(q_value_matrix.shape[1])):
left_action = q.get((row * n_columns + column, 3), -1000)
down_action = q.get((row * n_columns + column, 2), -1000)
right_action = q.get((row * n_columns + column, 1), -1000)
up_action = q.get((row * n_columns + column, 0), -1000)
arrow_direction = 'D'
best_action = down_action
if best_action < right_action:
arrow_direction = 'R'
best_action = right_action
if best_action < left_action:
arrow_direction = 'L'
best_action = left_action
if best_action < up_action:
arrow_direction = 'U'
best_action = up_action
if best_action == -1:
arrow_direction = ''
        # note that column, row appear in reversed order in the line below because they represent the plot's x, y
plt.text(column, row, arrow_direction, horizontalalignment="center")
plt.xticks([])
plt.yticks([])
plt.show()
    print('\n Best action-value matrix (numeric): \n\n', q_value_matrix)
```
Example: CartPole agent
```
import gym
import time
from IPython.display import clear_output
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
# it is not possible to display videos of the agent's run from Colab
if not IN_COLAB:
env = gym.make('CartPole-v0')
env.reset()
for _ in range(500):
env.render(mode='human')
        observation, reward, done, info = env.step(env.action_space.sample())  # take a random action
if done:
env.reset()
env.close()
clear_output()
```
Example: Mountain Car agent
```
if not IN_COLAB:
env = gym.make('MountainCar-v0')
observation = env.reset()
for t in range(500):
env.render(mode='human')
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
clear_output()
```
## Example 1: The Cliff.

where S = starting point, G = goal
(image from Sutton and Barto, 2018)
Environment description:
Actions:
* $\uparrow$ - Up
* $\downarrow$ - Down
* $\rightarrow$ - Right
* $\leftarrow$ - Left
Reward function:
* $-100$ on the cliff
* $-1$ in every other state
Note: falling off the cliff returns the agent to the starting state within the same episode
Let's look at the basic building blocks of our agent.
We define the action-selection method. In this case it uses the $\epsilon$-greedy exploration policy.
```
def choose_action(state):
"""
Chooses an action according to the learning previously performed
using an epsilon-greedy exploration policy
"""
    q_values = [q.get((state, a), 0.0) for a in actions]  # e.g. for 4 actions this initializes to [0,0,0,0]
    max_q = max(q_values)
    if random_state.uniform() < epsilon:  # draw a random number: is it smaller than epsilon?
        return random_state.choice(actions)  # yes: select a random action
    count = q_values.count(max_q)
    # is there more than one maximum state-action value?
    if count > 1:
        # yes: pick one of them at random
        best = [i for i in range(len(actions)) if q_values[i] == max_q]
        i = random_state.choice(best)
    else:
        # no: select the maximum state-action value
        i = q_values.index(max_q)
return actions[i]
```
We define the skeleton of the learn method, which takes a transition and updates the dict of Q values according to some algorithm.
```
def learn(state, action, reward, next_state, next_action):
"""
Performs a SARSA update for a given state transition
"""
    # TODO - complete with your code here
pass
```
Finally, we define the main iteration method.
```
def run():
"""
Runs the reinforcement learning agent with a given configuration.
"""
    timesteps_of_episode = []  # record of how many steps each episode took
    reward_of_episode = []  # how much reward the agent received in each episode
    for i_episode in range(episodes_to_run):
        # an episode of the agent is run until it reaches the exit
        # or takes more than 2000 steps
        # reset the environment, obtaining its initial state
state = env.reset()
episode_reward = 0
done = False
t = 0
        # choose an action based on the current state
action = choose_action(state)
while not done:
            # the agent executes the chosen action and observes the results
next_state, reward, done, info = env.step(action)
next_action = choose_action(next_state)
episode_reward += reward
learn(state, action, reward, next_state, next_action)
if not done and t < 2000: # if the algorithm does not converge, it stops after 2000 timesteps
state = next_state
action = next_action
else:
                # the algorithm could not reach the goal within 2000 steps
                done = True  # manually set the done flag
timesteps_of_episode = np.append(timesteps_of_episode, [int(t + 1)])
reward_of_episode = np.append(reward_of_episode, max(episode_reward, -100))
t += 1
return reward_of_episode.mean(), timesteps_of_episode, reward_of_episode
```
With the basic methods defined, we proceed to instantiate our agent.
```
# create the dictionary that will hold the Q values for each (state, action) tuple
q = {}
# define its basic hyperparameters
alpha = 0.5
gamma = 1
epsilon = 0.1
tau = 25
episodes_to_run = 500
env = gym.make("CliffWalking-v0")
actions = range(env.action_space.n)
# declare a random seed
random_state = np.random.RandomState(42)
```
Once instantiated, we run our agent
```
avg_steps_per_episode, timesteps_ep, reward_ep = run()
```
### Analysis of the agent's run
#### Convergence analysis
Unlike in supervised learning, in reinforcement learning performance is evaluated through one specific function: the reward function. In practice, the reward function can be external (provided by the environment) or it can be designed by hand, to steer the agent towards what is considered best by design; in our example this could be a reward of $+1$ every time the agent reaches the goal state. This is known as *reward shaping*, and one must be very careful about the possible side effects of using it.
Since the goal of RL is to maximize the reward obtained, the information about the reward collected at each time-step or episode can be used to evaluate the agent's partial performance (this depends heavily on how the reward is distributed in the particular problem being solved).
To analyze the agent's run, we will look at its performance through two curves:
* Reward obtained in each episode: tells us how much reward the agent collected by summing the individual rewards of each episode. This gives us a sense of how well it avoided the cliff and reached the goal as quickly as possible.
* Steps taken in each episode: indicates how many steps the agent needed to complete the episode.
Both curves are usually smoothed to better appreciate their progression (although the steps-per-episode curve is sometimes analyzed without smoothing).
Let's look at the reward per episode (recall that in this environment every step yields a reward of $-1$, except falling off the cliff, where the reward is $-100$)
```
plot_reward_per_episode(reward_ep)
```
Let's look at the steps per episode
```
plot_steps_per_episode(timesteps_ep)
```
Smoothing...
```
plot_steps_per_episode_smooth(timesteps_ep)
```
#### Analysis of the action-value matrix and the optimal policy
Since this is a tabular example with few states / actions, convergence can also be analyzed from another point of view: the value of $Q(s,a)$ for the best action in each state at the end of training (this is the action the agent would take in each state under a *greedy* policy). Both views give us information about the convergence reached by the agent.
Keep in mind that this analysis is done mainly for educational purposes; for more complex environments it may not be feasible. In such cases, an alternative analysis could consist of having the agent execute the policy it was trained for, and evaluating it from the resulting behaviour (this would be the *test of the policy*, as opposed to the earlier *training of the policy*). A minimal sketch of such a greedy evaluation run is shown below.
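As an illustration of that idea, here is a hedged sketch of such a test run: it follows the greedy action from the learned `q` dictionary for a single episode and reports the return. It assumes the `env`, `q` and `actions` objects defined above and is not part of the original notebook.
```
def run_greedy_episode(env, q, actions, max_steps=200):
    # follow the greedy policy derived from the learned Q table, with no exploration
    state = env.reset()
    total_reward, done, steps = 0, False, 0
    while not done and steps < max_steps:
        action = max(actions, key=lambda a: q.get((state, a), 0.0))
        state, reward, done, info = env.step(action)
        total_reward += reward
        steps += 1
    return total_reward, steps

# total_reward, steps = run_greedy_episode(env, q, actions)
# print(total_reward, steps)
```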
```
draw_value_matrix(q)
env.close()
```
## Activities
1. Implement and run the SARSA algorithm on "The Cliff".
2. Implement and run the Q-Learning algorithm on "The Cliff". How does it converge compared to SARSA? Why? Comment on it.
3. Running with different hyperparameters, briefly describe how different values of $\alpha$, $\epsilon$ and $\gamma$ affect convergence.
4. (Optional) Implement the Softmax exploration policy, in which each action has probability $$\pi(a \mid s) = \frac{e^{Q(s,a)/\tau}}{\sum_{\widetilde{a} \in A}e^{Q(s,\widetilde{a})/\tau}}$$
5. (Optional) Implement Dyna-Q starting from the Q-Learning algorithm, incorporating an update through a learned model. Comment on how it performs compared to the other algorithms.
To leave the lab ready for grading, add a link to a GitHub repo containing a notebook that runs the agent in the spreadsheet sent via Slack.
END
## Python Data Structures Exercises
```
mydata = [{'Born': '2007',
'City': 'Cauneside',
'Crypto': ('FTH', 'Feathercoin'),
'Description': 'Natus voluptas repellat consequatur. Nihil nobis reprehenderit libero sunt nulla.\nVeniam quia ab consectetur voluptatibus reprehenderit debitis sint.',
'Email': '[email protected]',
'FavoriteURL': 'http://www.purins.com/',
'FirstName': 'Aija',
'LastName': 'Apsītis',
'Phone': ['+371 22654114', '+371 17411292', '+371 82836492'],
'UID': 0},
{'Born': '2001',
'City': 'Port Augustsstad',
'Crypto': ('BCH', 'Bitcoin Cash'),
'Description': 'Ipsam accusantium eos odit.\nAsperiores blanditiis mollitia praesentium cum sapiente dolore. Nulla excepturi nulla culpa esse eius reprehenderit.',
'Email': '[email protected]',
'FavoriteURL': 'https://www.zvaigzne.info/',
'FirstName': 'Nikolajs',
'LastName': 'Ziemelis',
'Phone': ['+371 73927263', '+(371) 77959887', '+37141903089'],
'UID': 1},
{'Born': '2010',
'City': 'Zvaigznefort',
'Crypto': ('NEM', 'XEM'),
'Description': 'Enim temporibus porro vitae explicabo nemo consequuntur dolorum. Reprehenderit accusantium mollitia dolorum soluta. Perspiciatis eius inventore impedit ipsam veniam.',
'Email': '[email protected]',
'FavoriteURL': 'http://alksnis-skujins.org/',
'FirstName': 'Jūla',
'LastName': 'Lagzdiņš',
'Phone': ['+37182427365',
'+371 57494435',
'+(371) 35580513',
'+37176997474'],
'UID': 2},
{'Born': '1989',
'City': 'Avotiņšberg',
'Crypto': ('FTH', 'Feathercoin'),
'Description': 'Nihil a atque laudantium quae voluptates eaque laborum. Nam aspernatur ipsam laudantium doloremque modi. Eos labore quaerat velit omnis dolor iste tempora. Assumenda necessitatibus dignissimos.',
'Email': '[email protected]',
'FavoriteURL': 'https://krumins.com/',
'FirstName': 'Hanss',
'LastName': 'Riekstiņš',
'Phone': ['+37125863621'],
'UID': 3},
{'Born': '1989',
'City': 'Lake Ģirts',
'Crypto': ('MSC', 'Omni'),
'Description': 'Quaerat sunt debitis. Eaque tempore perferendis quam dolore repellat ratione voluptas. Fugit nesciunt consectetur eos fugiat quis.',
'Email': '[email protected]',
'FavoriteURL': 'https://www.sprogis.com/',
'FirstName': 'Vilma',
'LastName': 'Zvirbulis',
'Phone': ['+37146458585',
'+371 72765881',
'+(371) 21770943',
'+(371) 54641642'],
'UID': 4},
{'Born': '1977',
'City': 'North Zelmaberg',
'Crypto': ('BC', 'BlackCoin'),
'Description': 'Corporis amet molestiae beatae. Aut possimus nam atque. Ex repellat ratione dolores libero.\nFacilis reprehenderit quibusdam tenetur dolor assumenda. Neque adipisci impedit.',
'Email': '[email protected]',
'FavoriteURL': 'https://strazdins.com/',
'FirstName': 'Roberts',
'LastName': 'Skuja',
'Phone': ['+37107395161', '+(371) 23375197', '+37178255498', '+37193169667'],
'UID': 5},
{'Born': '2000',
'City': 'South Emmamouth',
'Crypto': ('DRC', 'Decred'),
'Description': 'Ea facilis perferendis. Optio earum magni dolore quo similique odit.\nFacere accusamus unde facilis eos accusantium quaerat iure. Harum assumenda provident eius. Nobis a incidunt blanditiis alias.',
'Email': '[email protected]',
'FavoriteURL': 'https://www.kundzins.org/',
'FirstName': 'Roberts',
'LastName': 'Sproģis',
'Phone': ['+(371) 08167174',
'+371 53744269',
'+(371) 31467491',
'+(371) 39385039',
'+37109968538'],
'UID': 6},
{'Born': '1981',
'City': 'Lūsisfort',
'Crypto': ('DOGE', 'Dogecoin'),
'Description': 'Perspiciatis ipsum perspiciatis quibusdam tempore. Id aliquam ab nulla neque.\nIure veniam natus corporis officia sint minus. Placeat at porro cumque corrupti voluptates aperiam.',
'Email': '[email protected]',
'FavoriteURL': 'http://birznieks-purmals.org/',
'FirstName': 'Kristers',
'LastName': 'Liepiņš',
'Phone': ['+37186087909', '+(371) 46723286'],
'UID': 7},
{'Born': '2009',
'City': 'Johansbury',
'Crypto': ('XMR', 'Monero'),
'Description': 'Exercitationem iste non aut.',
'Email': '[email protected]',
'FavoriteURL': 'http://birznieks-purmals.com/',
'FirstName': 'Diāna',
'LastName': 'Kalējs',
'Phone': ['+(371) 60322027',
'+37118037508',
'+371 63268407',
'+(371) 16331194',
'+37167286470'],
'UID': 8},
{'Born': '1997',
'City': 'Paulīnafurt',
'Crypto': ('EMC', 'Emercoin'),
'Description': 'Labore amet quia soluta nam accusantium magni nihil. Commodi eveniet possimus perferendis magni. Quas dicta officia incidunt occaecati quasi. Molestiae minima officia neque maxime.',
'Email': '[email protected]',
'FavoriteURL': 'https://www.prieditis-baltins.com/',
'FirstName': 'Margareta',
'LastName': 'Puriņš',
'Phone': ['+37106899581'],
'UID': 9},
{'Born': '2007',
'City': 'Liepiņštown',
'Crypto': ('AUR', 'Auroracoin'),
'Description': 'Repellat esse voluptates amet beatae. Ducimus reprehenderit odit molestiae.\nCulpa ad eaque earum dicta eaque illum quisquam. Dolorem tempore vel eius.',
'Email': '[email protected]',
'FavoriteURL': 'http://www.karklins.com/',
'FirstName': 'Irēna',
'LastName': 'Vanags',
'Phone': ['+37155447041', '+(371) 22954167'],
'UID': 10},
{'Born': '1978',
'City': 'East Elizabeteland',
'Crypto': ('GRC', 'Gridcoin'),
'Description': 'Ut unde temporibus aperiam dolor laudantium nesciunt sint. Cupiditate ab velit.',
'Email': '[email protected]',
'FavoriteURL': 'http://alksnis-avotins.com/',
'FirstName': 'Silvija',
'LastName': 'Dūmiņš',
'Phone': ['+37123192224', '+37148590668', '+371 51499663'],
'UID': 11}]
# What is type of data is held in mydata?
# How many elements(in this case people) are in mydata?
# Print ALL information known about 6th person(the one with UID 5)
# Find First Two Phone Numbers for Roberts Skuja (by hand picked index since we do not know how to loop just yet)
a1 = None
assert(a1 == ['+37107395161', '+(371) 23375197']), "Expected ['+37107395161', '+(371) 23375197']"
a1
# Find First Three Phone Numbers for Roberts Sprogis (by hand picked index since we do not know how to loop just yet)
a2 = None
assert(a2 == ['+(371) 08167174', '+371 53744269', '+(371) 31467491']), "Expected ['+(371) 08167174', '+371 53744269', '+(371) 31467491']"
a2
# Find LAST Two Phone Numbers for Aija Apsitis (by hand picked index since we do not know how to loop just yet)
a3 = None
assert(a3 == ['+371 17411292', '+371 82836492']), "Expected ['+371 17411292', '+371 82836492']"
a3
a4 = None
# Find First Two Words (including space in between) in the Description for Kristers Liepins
# For now it is okay to use a numeric index
# There is also a solution using join and split which we can explore in class
assert(a4 == 'Perspiciatis ipsum'), "Expected 'Perspiciatis ipsum'"
a4
# What kind of Data Structure Holds Cryptocurrency information for each person?
# Find Favorite Crypto Symbol for Silvija Dumins
a5 = None
assert(a5 == 'GRC'), "Expected 'GRC'"
a5
```
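A hedged hint for the exercises above: they all reduce to indexing into a list of dictionaries. The sketch below uses a made-up `people` list (not `mydata`), so it shows the relevant patterns without giving away the answers.
```
# illustrative only: a tiny list of dictionaries shaped like mydata
people = [
    {'FirstName': 'Ana',
     'Crypto': ('BTC', 'Bitcoin'),
     'Phone': ['111', '222', '333'],
     'Description': 'First second third words here.'},
]

print(type(people), len(people))                        # container type and number of elements
print(people[0])                                        # everything known about one person
print(people[0]['Phone'][:2])                           # first two phone numbers (slice)
print(people[0]['Phone'][-2:])                          # last two phone numbers (negative slice)
print(' '.join(people[0]['Description'].split()[:2]))   # first two words of a description
print(people[0]['Crypto'][0])                           # symbol inside the (symbol, name) tuple
```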
```
import pandas as pd
train = pd.read_csv('train.csv', index_col='_id')
test = pd.read_csv('test.csv', index_col='_id')
train.info(), test.info()
train.shape, test.shape
y_train = list(train['target'])
train = train.drop('target', axis=1)
train.info(verbose=True)
train.loc[:,'sample'] = 'train'
test.loc[:,'sample'] = 'test'
df = train.append(test)
df.shape
df.head()
df.index.values.tolist().count('905a0b9a5456ee962223033473666be3')
df[['sample']]
df.select_dtypes(include='object').head()
def show_object_columns(df, show=True):
df_objects_dict = {}
for i in df.columns:
if str(df[i].dtype) == 'object':
if show:
print('='*10)
print(i)
print(set(df[i]))
print('\n')
df_objects_dict[i] = set(df[i])
return df_objects_dict
show_object_columns(df)
df.describe(percentiles=[0.99])
df[(df['marital'] == 'unknown') & (df['job'] == 'unknown')][['marital', 'loan', 'previous', 'poutcome', 'pdays', 'month', 'day_of_week','age', 'education', 'job', 'duration']]
for i in set(df['marital']):
print('count ',i, '==', df[df['marital'] == i]['marital'].count())
df.columns
df[['emp.var.rate', 'cons.price.idx',
'cons.conf.idx', 'euribor3m', 'nr.employed']]
from sklearn.preprocessing import LabelEncoder
def df_columns_labelencoding(columns_to_encode, dataframe, exclude_columns_list=None):
"""
Returns:
        encoded_dataframe : a copy of the original df with the given columns label-encoded
"""
encoded_dataframe = dataframe.copy()
columns_le_dict = dict()
for column in columns_to_encode:
if exclude_columns_list and column in exclude_columns_list:
continue
columns_le_dict[column] = LabelEncoder()
encoded_dataframe[column] = columns_le_dict[column].fit_transform(encoded_dataframe[column])
return encoded_dataframe, columns_le_dict
object_columns = show_object_columns(df, show=False)
encoded_df, columns_encoders_dict = df_columns_labelencoding(object_columns.keys(), df, ['sample'])
encoded_df.info()
encoded_df.head()
# encoded_df_saved = encoded_df.copy()
# encoded_df = encoded_df.drop(['month','day_of_week'],axis=1)
def filter_columns(list_c, exc):
l = list()
for c in list_c:
if c not in exc:
l.append(c)
return l
# cols = filter_columns(columns_encoders_dict.keys(), ['month', 'day_of_week'])
dummied_encoded_df = pd.get_dummies(encoded_df, columns=list(columns_encoders_dict.keys()))
# dummied_encoded_df = pd.get_dummies(encoded_df, columns=cols)
dummied_encoded_df.info()
dummied_encoded_df.shape
dummied_encoded_df.head()
for c in ['emp.var.rate', 'pdays', 'cons.price.idx', 'cons.conf.idx', 'euribor3m', 'nr.employed']:
print(c, ' ===='*10)
print(set(dummied_encoded_df[c]))
print('\n')
dummied_encoded_df = pd.get_dummies(dummied_encoded_df, columns=['emp.var.rate', 'nr.employed'])
dummied_encoded_df.info()
pdays_999_previous_1 = dummied_encoded_df[
((dummied_encoded_df['pdays'] == 999)) & (dummied_encoded_df['previous'] == 1)
][['pdays', 'duration']].pivot_table(index=['_id', 'pdays'])['duration'].mean()
pdays_999 = dummied_encoded_df['duration'].mean()
duration_mean = dummied_encoded_df['duration'].mean()
pt = dummied_encoded_df[
((dummied_encoded_df['pdays'] == 999)) & (dummied_encoded_df['previous'] == 1)
][['pdays', 'duration']].pivot_table(index=['_id', 'pdays'])
pt['derived'] = [1 if v >= pdays_999_previous_1 else 0 for v in pt['duration']]
pt['derived'].value_counts()
######## Let the magic begin ###################################################### @ATTENTION_PLZ
dedf = dummied_encoded_df.copy()
dedf['pdays_state'] = [0 if v >= 999 else 1 for v in dedf['pdays']] # previous contact, boolean state 0 -no, 1 - yes
dedf['pdays_state_mean'] = [0 if v == 999 or v < pdays_999 else 1 for v in dedf['pdays']]
dedf['duration_mean'] = [1 if v > duration_mean else 0 for v in dedf['duration']]
dedf[['pdays', 'previous', 'duration','pdays_state', 'pdays_state_mean', 'duration_mean']].sample(n=5)
# print(set(dedf['cons.price.idx']))
# print(dedf['cons.price.idx'].mean())
cpi_mean = dedf['cons.price.idx'].mean()
cci_mean = dedf['cons.conf.idx'].mean()
euribor3m_mean = dedf['euribor3m'].mean()
dedf['cons.price.idx_mean'] = [v/cpi_mean for v in dedf['cons.price.idx']]
dedf['cons.conf.idx_mean'] = [v/cci_mean for v in dedf['cons.conf.idx']]
dedf['euribor3m_mean'] = [v/euribor3m_mean for v in dedf['euribor3m']]
# dedf['cons.price.idx_mean_splitting'] = [1 if v >= 1 else 0 for v in dedf['cons.price.idx_mean']]
# dedf['cons.conf.idx_mean_splitting'] = [1 if v >= 1 else 0 for v in dedf['cons.conf.idx_mean']]
# euribor3m_mean_mean = dedf['euribor3m_mean'].mean()
# dedf['euribor3m_mean_splitting'] = [1 if v >=euribor3m_mean_mean else 0 for v in dedf['euribor3m_mean']]
dedf[['euribor3m',
'euribor3m_mean',
# 'euribor3m_mean_splitting',
'cons.price.idx',
'cons.price.idx_mean',
# 'cons.price.idx_mean_splitting',
'cons.conf.idx',
'cons.conf.idx_mean',
# 'cons.conf.idx_mean_splitting'
]].sample(n=5)
#### @ATTENTION_PLZ
# dedf = dedf.drop(['euribor3m', 'cons.price.idx', 'cons.conf.idx', 'pdays'], axis=1)
dedf.sample(n=5)
dedf = pd.get_dummies(dedf, columns=['pdays'])
dedf.info()
dedf_train = dedf.query('sample == "train"').drop(['sample'], axis=1)
dedf_test = dedf.query('sample == "test"').drop(['sample'], axis=1)
dedf_train.shape, dedf_test.shape
len(dedf_train), len(y_train)
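# Hold out part of the labelled training data so the candidate models can be compared locally
# before scoring the real test set.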
from sklearn.model_selection import train_test_split
X_train_train, X_train_test, y_train_train, y_train_test = train_test_split(dedf_train, y_train, test_size=0.35, random_state=40)
X_train_train.shape, X_train_test.shape
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (12,5)
knn = KNeighborsClassifier()
knn.fit(X_train_train, y_train_train)
predict_knn = knn.predict(X_train_test)
predict_proba_knn = knn.predict_proba(X_train_test)
dtc = DecisionTreeClassifier()
dtc.fit(X_train_train, y_train_train)
predict_dtc = dtc.predict(X_train_test)
predict_proba_dtc = dtc.predict_proba(X_train_test)
rfc = RandomForestClassifier()
rfc.fit(X_train_train, y_train_train)
predict_rfc = rfc.predict(X_train_test)
predict_proba_rfc = rfc.predict_proba(X_train_test)
lr = LogisticRegression()
lr.fit(X_train_train, y_train_train)
predict_lr = lr.predict(X_train_test)
predict_proba_lr = lr.predict_proba(X_train_test)
from sklearn.metrics import accuracy_score, precision_score, recall_score
models_accuracy = {
'dtc' : accuracy_score(y_train_test, predict_dtc),
'rfc' : accuracy_score(y_train_test, predict_rfc),
'lr' : accuracy_score(y_train_test, predict_lr),
'knn' : accuracy_score(y_train_test, predict_knn)
}
best_accuracy_score = max(models_accuracy.values())
models_accuracy, best_accuracy_score, list(filter(lambda k: models_accuracy.get(k) == best_accuracy_score, models_accuracy.keys()))
models_precision_score = {
'dtc' : precision_score(y_train_test, predict_dtc),
'rfc' : precision_score(y_train_test, predict_rfc),
'lr' : precision_score(y_train_test, predict_lr),
'knn' : precision_score(y_train_test, predict_knn)
}
best_precision_score = max(models_precision_score.values())
models_precision_score, best_precision_score, list(filter(lambda k: models_precision_score.get(k) == best_precision_score, models_precision_score.keys()))
models_recall_score = {
'dtc' : recall_score(y_train_test, predict_dtc),
'rfc' : recall_score(y_train_test, predict_rfc),
'lr' : recall_score(y_train_test, predict_lr),
'knn' : recall_score(y_train_test, predict_knn)
}
best_recall_score = max(models_recall_score.values())
models_recall_score, best_recall_score, list(filter(lambda k: models_recall_score.get(k) == best_recall_score, models_recall_score.keys()))
models_accuracy, models_precision_score, models_recall_score
from sklearn.metrics import precision_recall_curve
from matplotlib import pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve
precision_prc_dtc, recall_prc_dtc, treshold_prc_dtc = precision_recall_curve(y_train_test, predict_proba_dtc[:,1])
precision_prc_rfc, recall_prc_rfc, treshold_prc_rfc = precision_recall_curve(y_train_test, predict_proba_rfc[:,1])
precision_prc_lr, recall_prc_lr, treshold_prc_lr = precision_recall_curve(y_train_test, predict_proba_lr[:,1])
precision_prc_knn, recall_prc_knn, treshold_prc_knn = precision_recall_curve(y_train_test, predict_proba_knn[:,1])
plt.figure(figsize=(15, 15))
plt.plot(precision_prc_dtc, recall_prc_dtc, label='dtc')
plt.plot(precision_prc_rfc, recall_prc_rfc, label='rfc')
plt.plot(precision_prc_lr, recall_prc_lr, label='lr')
plt.plot(precision_prc_knn, recall_prc_knn, label='knn')
plt.legend(loc='upper right')
plt.ylabel('recall')
plt.xlabel('precision')
plt.grid(True)
plt.title('Precision Recall Curve')
plt.xlim((-0.01, 1.01))
plt.ylim((-0.01, 1.01))
fpr_dtc, tpr_dtc, thresholds_dtc = roc_curve(y_train_test, predict_proba_dtc[:,1])
fpr_rfc, tpr_rfc, thresholds_rfc = roc_curve(y_train_test, predict_proba_rfc[:,1])
fpr_lr, tpr_lr, thresholds_lr = roc_curve(y_train_test, predict_proba_lr[:,1])
fpr_knn, tpr_knn, thresholds_knn = roc_curve(y_train_test, predict_proba_knn[:,1])
plt.figure(figsize=(12, 12))
plt.plot(fpr_dtc, tpr_dtc, label='dtc')
plt.plot(fpr_rfc, tpr_rfc, label='rfc')
plt.plot(fpr_lr, tpr_lr, label='lr')
plt.plot(fpr_knn, tpr_knn, label='knn')
plt.legend()
plt.plot([1.0, 0], [1.0, 0])
plt.ylabel('tpr')
plt.xlabel('fpr')
plt.grid(True)
plt.title('ROC curve')
plt.xlim((-0.01, 1.01))
plt.ylim((-0.01, 1.01))
results = dict(dtc=roc_auc_score(y_train_test, predict_proba_dtc[:,1]),
rfc=roc_auc_score(y_train_test, predict_proba_rfc[:,1]),
lr=roc_auc_score(y_train_test, predict_proba_lr[:,1]),
knn=roc_auc_score(y_train_test, predict_proba_knn[:,1]))
print(results)
pd.DataFrame(list(zip(lr.coef_[0], dedf.columns))).sort_values(0)
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed; cross_val_score lives in model_selection
from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=15, shuffle=True, random_state=100)
s1 = cross_val_score(dtc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( knn, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits())
for train_ind, test_ind in cv.split(dedf_train, y_train):
x_train_xval_ml = np.array(dedf_train)[train_ind,:]
x_test_xval_ml = np.array(dedf_train)[test_ind,:]
y_train_xval_ml = np.array(y_train)[train_ind]
s2 = cross_val_score( dtc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score(knn, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits())
s3 = cross_val_score( dtc, dedf_train, y_train, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, dedf_train, y_train, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, dedf_train, y_train, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score(knn, dedf_train, y_train, scoring='roc_auc', cv=cv.get_n_splits())
s1,s2,s3
predict_test_lr_proba = lr.predict_proba(dedf_test) # best
predict_test_knn_proba = knn.predict_proba(dedf_test)
predict_test_dtc_proba = dtc.predict_proba(dedf_test)
predict_test_rfc_proba = rfc.predict_proba(dedf_test)
supposed_y_test = pd.read_csv('sample_submission.csv', index_col='_id')
supposed_y_test[supposed_y_test['target'] == 0].count(), supposed_y_test[supposed_y_test['target'] == 1].count()
len(predict_test_lr_proba)
predict_test_lr = lr.predict(dedf_test)
predict_test_dtc = dtc.predict(dedf_test)
predict_test_rfc = rfc.predict(dedf_test)
predict_test_knn = knn.predict(dedf_test)
ones_lr = [v for v in predict_test_lr if v == 1]
zeros_lr = [v for v in predict_test_lr if v == 0]
print(len(ones_lr), len(zeros_lr))
ones_dtc = [v for v in predict_test_dtc if v == 1]
zeros_dtc = [v for v in predict_test_dtc if v == 0]
print(len(ones_dtc), len(zeros_dtc))
ones_rfc = [v for v in predict_test_rfc if v == 1]
zeros_rfc = [v for v in predict_test_rfc if v == 0]
print(len(ones_rfc), len(zeros_rfc))
ones_knn = [v for v in predict_test_knn if v == 1]
zeros_knn = [v for v in predict_test_knn if v == 0]
print(len(ones_knn), len(zeros_knn))
combined_results_lr = list(zip(dedf_test.index, predict_test_lr))
combined_results_dtc = list(zip(dedf_test.index, predict_test_dtc))
combined_results_rfc = list(zip(dedf_test.index, predict_test_rfc))
combined_results_knn = list(zip(dedf_test.index, predict_test_knn))
# result_prediction = pd.DataFrame(, columns=['_id', 'target'])
result_prediction_lr = pd.DataFrame(combined_results_lr, columns=['_id', 'target'])
result_prediction_rfc = pd.DataFrame(combined_results_rfc, columns=['_id', 'target'])
result_prediction_dtc = pd.DataFrame(combined_results_dtc, columns=['_id', 'target'])
result_prediction_knn = pd.DataFrame(combined_results_knn, columns=['_id', 'target'])
result_prediction_lr.to_csv('result_prediction_lr_hw03.csv', sep=',', index=False, columns=['_id', 'target'])
result_prediction_rfc.to_csv('result_prediction_rfc_hw03.csv', sep=',', index=False, columns=['_id', 'target'])
result_prediction_dtc.to_csv('result_prediction_dtc_hw03.csv', sep=',', index=False, columns=['_id', 'target'])
result_prediction_knn.to_csv('result_prediction_knn_hw03.csv', sep=',', index=False, columns=['_id', 'target'])
```
```
#library
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tensorflow import keras
import tensorflow.keras.applications as ap
#mount file from google drive
from google.colab import drive
drive.mount('/content/drive')
#grab the data
img512 = np.load('/content/drive/MyDrive/Colab Notebooks/FINAL_DL/img512.npy')
label = np.load('/content/drive/MyDrive/Colab Notebooks/FINAL_DL/label.npy')
X512_train = img512
y512_train = label
y512_train = tf.keras.utils.to_categorical(y512_train, 4)
#checkpoint
checkpoint_filepath = '/content/drive/MyDrive/Colab Notebooks/FINAL_DL/checkpoint/resnet'
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
    filepath = checkpoint_filepath,
    save_weights_only = True,
    save_freq = 'epoch',
    mode = 'auto',
    save_best_only = True,
    monitor = 'val_accuracy'
)
#resnet
def model_resnet(X_train, y_train,epochs = 8):
base_model = ap.ResNet50(weights='imagenet',input_shape=(512,512,3),include_top=False)
base_model.trainable = False
head_model = base_model.output
head_model = keras.layers.Flatten()(head_model)
#head_model = keras.layers.Dense(512, activation = 'relu')(head_model)
#head_model = keras.layers.Dropout(0.5)(head_model)
#head_model = keras.layers.Dense(128, activation = 'relu')(head_model)
head_model = keras.layers.Dense(4,activation='softmax')(head_model)
model = keras.Model(base_model.input,head_model)
#model = base_model
  callbacks = [keras.callbacks.TensorBoard(log_dir='/content/drive/MyDrive/Colab Notebooks/FINAL_DL/logs/resnet'), model_checkpoint_callback]
model.compile(optimizer=keras.optimizers.Adam(), loss='categorical_crossentropy',metrics=['accuracy'])
model.fit(X_train,y_train,batch_size=128, epochs=epochs, callbacks = callbacks, validation_split = 0.2)
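  # Fine-tuning phase: unfreeze the ResNet50 backbone and continue training end-to-end with a much smaller learning rate.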
base_model.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5), loss='categorical_crossentropy',metrics=['accuracy'])
history = model.fit(X_train,y_train,batch_size=128, epochs=epochs, callbacks = callbacks, validation_split = 0.2)
return model,history
model_resnet_m2,history= model_resnet(X512_train,y512_train,4)
model_resnet_m2.save_weights('/content/drive/MyDrive/Colab Notebooks/FINAL_DL/weights/resnet_m2_weights.h5')
#save the curve
his = pd.DataFrame(history.history)
his.to_pickle('/content/drive/MyDrive/Colab Notebooks/FINAL_DL/curve/history_resnet2.pkl')
model_resnet_m2.summary()
```
# Population Tool: Alpha
## First Step: Define functions we need
Import necessary packages and declare constant variables
```
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
matplotlib.style.use('ggplot')
# Use these paths to pull real data
POP_DATA_PATH = 'https://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/1_Population/WPP2017_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx'
POP_RELATABLE_PATH = 'https://raw.githubusercontent.com/ONEcampaign/humanitarian-data-service/master/resources/data/derived/example/2017_relatable_population_rankings.csv'
POP_AGE_DATA_PATH = 'https://esa.un.org/unpd/wpp/DVD/Files/1_Indicators%20(Standard)/EXCEL_FILES/1_Population/WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.xlsx'
# Use these paths when testing locally
# POP_DATA_PATH = 'local_data/WPP2017_POP_F01_1_TOTAL_POPULATION_BOTH_SEXES.xlsx'
# POP_RELATABLE_PATH = 'local_data/2017_relatable_population_rankings.csv'
# POP_AGE_DATA_PATH = 'local_data/WPP2017_POP_F07_1_POPULATION_BY_AGE_BOTH_SEXES.xlsx'
%matplotlib inline
```
Make a function that pulls general population data from the UN site and re-shapes it for our use
```
def ReadPopulationData(path, skiprows = 16):
    medium_variant = pd.read_excel(path, sheet_name=1, skiprows=skiprows)
medium_variant_long = pd.melt(medium_variant,
id_vars=['Index','Variant','Region, subregion, country or area *','Notes','Country code'],
value_name = 'Population',
var_name = 'Year')
return medium_variant_long
```
Write another function that pulls the age-disaggregated data from the UN site
```
def ReadPopulationAgeData(path, skiprows = 16):
    age_data = pd.read_excel(path, sheet_name=1, skiprows=skiprows)
age_data_long = pd.melt(age_data,
id_vars=['Index','Variant','Region, subregion, country or area *','Notes','Country code','Reference date (as of 1 July)'],
value_name = 'Population',
var_name = 'Age Cohort')
return age_data_long
```
Make a function that asks the user to input a valid country name contained in the data set
```
def GetValidCountry(dataset):
valid_country = False
while valid_country == False:
country = input('Enter a country name (e.g. Nigeria):')
if country == 'quit':
quit()
elif country in dataset['Region, subregion, country or area *'].unique():
valid_country = True
print('Thanks. {} is in the dataset.'.format(country))
else:
print('Sorry, {} is not in dataset. Please try again, e.g. Nigeria:'.format(country))
return country
```
Make a function that asks the user to input a valid year contained in the data set
```
def GetValidYear(dataset, check_field = 'Year'):
valid_year = False
while valid_year == False:
        year = input('Enter a projection year (e.g. 2040):')
        if year == 'quit':
            quit()
        elif year.isdigit() and int(year) in dataset[check_field].unique():
            year = int(year)
valid_year = True
print('Thanks. {} is in the dataset.'.format(year))
else:
print('Sorry, {} is not in dataset. Please try again:'.format(year))
return year
```
Another function that asks the user to input a valid age cohort
```
def GetValidAgeCohort(dataset, check_field = 'Age Cohort'):
valid_cohort = False
while valid_cohort == False:
cohort = str(input('Enter an age cohort (e.g. 10-14):'))
if cohort == 'quit':
quit()
elif cohort in dataset[check_field].unique():
valid_cohort = True
print('Thanks. {} is in the dataset.'.format(cohort))
else:
print('Sorry, {} is not in dataset. Please try again. Valid values:'.format(cohort))
print(dataset[check_field].unique())
return cohort
```
Write a function that runs the main menu prompts and asks the user to select an option
```
def MainMenu():
print("")
print("*********************")
print("Welcome to David & Kate's Very Simple Python Population Tool. Please select an option from the list:")
print("1) Get a population projection for a given country and year")
print("2) Find a population projection for a given country, year and age cohort, along with a comparable population")
valid_answer = False
while valid_answer == False:
selection = str(input('Input a number (1 or 2):'))
if selection in ['1','2','quit']:
valid_answer = True
else:
print("Sorry, that is not a valid selection. Please enter numbers 1-2 or type 'quit':")
return selection
```
A small function to ask if the user would like to keep investigating or quit
```
def AnotherQuery():
valid_answer = False
while valid_answer == False:
response = input('Would you like to make another query? (Y/N)')
if response.lower() == 'y':
valid_answer = True
keep_playing = True
elif response.lower() == 'n':
print('Thanks for using this tool. Quitting....')
keep_playing = False
break
else:
print('Sorry, invalid response. Please type Y or N.')
return keep_playing
```
A function for Task 1: input a country and year and return the relevant population projection
```
def TaskOne(dataset):
country = GetValidCountry(dataset)
year = GetValidYear(dataset)
population = dataset.loc[(dataset['Region, subregion, country or area *'] == country) &
(dataset['Year'] == year),'Population'].values[0]
print('The population for {} in the year {} is projected to be {} thousand.'.format(country, year, population))
print('A time series plot of this population over time:')
subset = dataset[(dataset['Region, subregion, country or area *'] == country)]
subset.plot(x='Year', y='Population')
plt.title('Projected Population of {}'.format(country))
plt.ylabel('Population (thousands)')
plt.show()
```
A helper for Task 2: load the relatable-populations data and find the place whose population is closest to a given figure
```
def GetComparablePopulation(reference_value, path = POP_RELATABLE_PATH):
df_relatable_populations = pd.read_csv(path)
df_relatable_populations['Population'] = df_relatable_populations[[
'Population - World Bank (2015)','Population - UNFPA (2016)'
]].max(axis=1)
df_relatable_populations = df_relatable_populations[['City, State, Country',
'Population']].dropna()
def find_nearest_place_population(reference_value, df_relatable_populations = df_relatable_populations):
if reference_value:
nearest_row = df_relatable_populations.iloc[(df_relatable_populations['Population']- reference_value).abs().argsort()[0]]
nearest_population = nearest_row['Population']
else:
nearest_population = 0.00
return nearest_population
def find_nearest_place(reference_value, df_relatable_populations = df_relatable_populations):
if reference_value:
nearest_row = df_relatable_populations.iloc[(df_relatable_populations['Population']- reference_value).abs().argsort()[0]]
nearest_place = nearest_row['City, State, Country']
else:
nearest_place = ''
return nearest_place
return find_nearest_place(reference_value), find_nearest_place_population(reference_value)
```
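As a quick, illustrative sanity check (not part of the original tool), the helper can be called directly with a raw population figure; the 8,000,000 below is an arbitrary example, and the call downloads the relatable-populations CSV:
```
# Example usage sketch: find the place whose population is closest to 8 million.
place, place_population = GetComparablePopulation(8000000)
print('Closest match: {} (~{:.0f} people)'.format(place, place_population))
```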
A function for Task 3: Find a population projection for a given country, year and age cohort
```
def TaskThree(dataset):
country = GetValidCountry(dataset)
year = GetValidYear(dataset, check_field = 'Reference date (as of 1 July)')
age = GetValidAgeCohort(dataset)
population = dataset.loc[(dataset['Region, subregion, country or area *'] == country) &
(dataset['Reference date (as of 1 July)'] == year) &
(dataset['Age Cohort'] == age),'Population'].values[0]
similar_place, similar_pop = GetComparablePopulation(population*1000)
print('The population aged {} for {} in the year {} is projected to be {} thousand.'.format(age, country, year, population))
print('That is similar to the current population of {} ({} thousand people).'.format(similar_place, similar_pop/1000))
print('A time series plot of this age cohort over time:')
subset = dataset[(dataset['Region, subregion, country or area *'] == country) &
(dataset['Age Cohort'] == age)]
subset.plot(x='Reference date (as of 1 July)', y='Population')
plt.title('Projected Population Aged {} in {}'.format(age, country))
plt.ylabel('Population (thousands)')
plt.show()
```
Write the main function that calls all the other functions when needed
```
def run():
keep_using = True
    # Create blank variables to store each population data set when needed
pop_data = None
pop_relateable = None
pop_age_data = None
# Start a loop that will keep going until the user decides to quit
while keep_using == True:
selection = MainMenu() # Run main menu function to retrieve a valid menu option
# Series of if statements to do different actions based on menu option
if selection == '1':
print("Thanks. You selected option 1.")
if pop_data is None: # Check if the population data is already downloaded and if not, download it
print("Downloading the latest data from the UN....")
pop_data = ReadPopulationData(POP_DATA_PATH)
pop_data['Year'] = pop_data['Year'].astype('int64')
TaskOne(pop_data) # Run task 1 function
elif selection == '2':
print("Thanks. You selected option 3.")
if pop_age_data is None: # Check if the population data is already downloaded and if not, download it
print("Downloading the latest data from the UN....")
pop_age_data = ReadPopulationAgeData(POP_AGE_DATA_PATH)
TaskThree(pop_age_data) # Run Task 3 function
elif selection == "quit": # Add a secret 'quit' option in case the programme malfunctions in testing
print("Quitting...")
break
else: # Hopefully no one should get to this point, but just in case print an error message and stop the loop
print("Error")
break
# Before re-running the loop, run the AnotherQuery function to see if the user would like to continue
keep_using = AnotherQuery()
return
```
## Run the programme!
```
run()
```
# Inspecting TFX metadata
## Learning Objectives
1. Use a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.
In this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted **AI Platform Pipelines** instance includes the [ML Metadata](https://github.com/google/ml-metadata) service. In **AI Platform Pipelines**, ML Metadata uses *MySQL* as a database backend and can be accessed using a GRPC server.
## Setup
```
import os
import ml_metadata
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2
from tfx.orchestration import metadata
from tfx.types import standard_artifacts
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
```
### Option 1: Explore metadata from existing TFX pipeline runs from AI Pipelines instance created in `lab-02` or `lab-03`.
#### 1.1 Configure Kubernetes port forwarding
To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.
From a JupyterLab terminal, execute the following commands:
```
gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080
```
Proceed to the next step, "Connecting to ML Metadata".
### Option 2: Create new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.
Hosted AI Platform Pipelines incurs cost for the duration your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and trigger runs to inspect its metadata.
```
import yaml
# Set `PATH` to include the directory containing TFX CLI.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
```
The pipeline source can be found in the `pipeline` folder. Switch to the `pipeline` folder and compile the pipeline.
```
%cd pipeline
```
#### 2.1 Create AI Platform Pipelines cluster
Navigate to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.
Create or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select `"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform"` to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an `App instance name` such as "TFX-lab-04".
#### 2.2 Configure environment settings
Update the below constants with the settings reflecting your lab environment.
- `GCP_REGION` - the compute region for AI Platform Training and Prediction
- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `kubeflowpipelines-` prefix. Alternatively, you can create a new storage bucket to write pipeline artifacts to.
```
!gsutil ls
```
- `CUSTOM_SERVICE_ACCOUNT` - the user-created custom Google Cloud service account (set up during the initial lab setup) that your pipeline's AI Platform Training job uses to access the Cloud AI Platform Vizier service. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup `README` for setup instructions.
- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint to your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.
1. Open the *SETTINGS* for your instance
2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
```
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = 'us-central1'
ARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default'
ENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com'
CUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
```
#### 2.3 Compile pipeline
```
PIPELINE_NAME = 'tfx_covertype_lab_04'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
```
#### 2.4 Deploy pipeline to AI Platform
```
!tfx pipeline create \
--pipeline_path=runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
```
(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:
```
!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}
```
#### 2.5 Create and monitor pipeline run
```
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
```
#### 2.6 Configure Kubernetes port forwarding
To enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.
From a JupyterLab terminal, execute the following commands:
```
gcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE]
kubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080
```
## Connecting to ML Metadata
### Configure ML Metadata GRPC client
```
grpc_host = 'localhost'
grpc_port = 7000
connection_config = metadata_store_pb2.MetadataStoreClientConfig()
connection_config.host = grpc_host
connection_config.port = grpc_port
```
### Connect to ML Metadata service
```
store = metadata_store.MetadataStore(connection_config)
```
### Important
A full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below.
## Exploring ML Metadata
The Metadata Store uses the following data model:
- `ArtifactType` describes an artifact's type and its properties that are stored in the Metadata Store. These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded in the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.
- `Artifact` describes a specific instances of an ArtifactType, and its properties that are written to the Metadata Store.
- `ExecutionType` describes a type of component or step in a workflow, and its runtime parameters.
- `Execution` is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Every time a developer runs an ML pipeline or step, executions are recorded for each step.
- `Event` is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events MLMD knows what Executions happened, what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.
- `ContextType` describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.
- `Context` is an instances of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations. It has a user-defined unique name within its ContextType.
- `Attribution` is a record of the relationship between Artifacts and Contexts.
- `Association` is a record of the relationship between Executions and Contexts.
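As a concrete illustration of how Events, Executions and Contexts tie an artifact to its provenance, the sketch below walks backwards from the most recent `ModelEvaluation` artifact. It assumes the standard `MetadataStore` query methods (`get_events_by_artifact_ids`, `get_executions_by_id`, `get_contexts_by_artifact`) are available in your installed `ml-metadata` version, and it only makes sense after at least one pipeline run has completed.
```
# Provenance sketch: which executions touched the latest ModelEvaluation artifact,
# and which contexts (for example pipeline runs) is it attributed to?
model_evals = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
if model_evals:
    artifact = model_evals[-1]
    events = store.get_events_by_artifact_ids([artifact.id])
    executions = store.get_executions_by_id(list({e.execution_id for e in events}))
    print('Artifact uri:', artifact.uri)
    for execution in executions:
        print('Linked execution id:', execution.id)
    for context in store.get_contexts_by_artifact(artifact.id):
        print('Attributed context:', context.name)
```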
List the registered artifact types.
```
for artifact_type in store.get_artifact_types():
print(artifact_type.name)
```
Display the registered execution types.
```
for execution_type in store.get_execution_types():
print(execution_type.name)
```
List the registered context types.
```
for context_type in store.get_context_types():
print(context_type.name)
```
## Visualizing TFX artifacts
### Retrieve data analysis and validation artifacts
```
with metadata.Metadata(connection_config) as store:
stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)
schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME)
anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)
stats_path = stats_artifacts[-1].uri
train_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')
eval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')
print("Train stats file:{}, Eval stats file:{}".format(
train_stats_file, eval_stats_file))
schema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')
print("Generated schame file:{}".format(schema_file))
anomalies_file = os.path.join(anomalies_artifacts[-1].uri, 'anomalies.pbtxt')
print("Generated anomalies file:{}".format(anomalies_file))
```
### Visualize statistics
#### Exercise: looking at the features visualized below, answer the following questions:
- Which feature transformations would you apply to each feature with TF Transform?
- Are there data quality issues with certain features that may impact your model performance? How might you deal with it?
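For reference, transformations of the kind asked about in the first question are typically expressed in a TF Transform `preprocessing_fn`. The sketch below is purely illustrative: the feature names are assumptions and are not taken from this pipeline's schema.
```
import tensorflow_transform as tft

# Illustrative feature lists; the names are assumptions, not the pipeline's actual schema.
NUMERIC_FEATURES = ['Elevation', 'Slope']
CATEGORICAL_FEATURES = ['Wilderness_Area', 'Soil_Type']
LABEL_KEY = 'Cover_Type'

def preprocessing_fn(inputs):
    """Minimal sketch: standardise numeric features, vocabulary-encode categorical ones."""
    outputs = {}
    for key in NUMERIC_FEATURES:
        outputs[key + '_xf'] = tft.scale_to_z_score(inputs[key])
    for key in CATEGORICAL_FEATURES:
        outputs[key + '_xf'] = tft.compute_and_apply_vocabulary(inputs[key])
    outputs[LABEL_KEY] = inputs[LABEL_KEY]
    return outputs
```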
```
train_stats = tfdv.load_statistics(train_stats_file)
eval_stats = tfdv.load_statistics(eval_stats_file)
tfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,
lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')
```
### Visualize schema
```
schema = tfdv.load_schema_text(schema_file)
tfdv.display_schema(schema=schema)
```
### Visualize anomalies
```
anomalies = tfdv.load_anomalies_text(anomalies_file)
tfdv.display_anomalies(anomalies)
```
### Retrieve model evaluations
```
with metadata.Metadata(connection_config) as store:
model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)
model_eval_path = model_eval_artifacts[-1].uri
print("Generated model evaluation result:{}".format(model_eval_path))
```
### Visualize model evaluations
#### Exercise: review the model evaluation results below and answer the following questions:
- Which Wilderness Area had the highest accuracy?
- Which Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?
```
eval_result = tfma.load_eval_result(model_eval_path)
tfma.view.render_slicing_metrics(
eval_result, slicing_column='Wilderness_Area')
```
**Debugging tip**: If the TFMA visualization of the Evaluator results do not render, try switching to view in a Classic Jupyter Notebook. You do so by clicking `Help > Launch Classic Notebook` and re-opening the notebook and running the above cell to see the interactive TFMA results.
## License
<font size=-1>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>
### Optimizing the above workflow with the Viterbi algorithm
Data required for this project:
1. 综合类中文词库.xlsx: a list of Chinese words, used as the dictionary.
2. A small set of unigram probabilities, provided as the variable word_prob.
For example: given the dictionary = [我们 学习 人工 智能 人工智能 未来 是] and the unigram probabilities p(我们)=0.25, p(学习)=0.15, p(人工)=0.05, p(智能)=0.1, p(人工智能)=0.2, p(未来)=0.1, p(是)=0.15.
#### Step 1: Build a weighted directed graph (DAG) from the dictionary, the input sentence, and word_prob (see the course material)
Every edge of the directed graph carries the probability of one word (any substring that appears in the dictionary counts as a valid word); these probabilities are given in word_prob.
Note: think about which data structure is best suited to store this directed graph. There is more than one reasonable way to store it; one option is sketched below.
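A minimal sketch of one such representation, a reverse adjacency list keyed by the end position of each word, using the toy dictionary from the example above (the probabilities are the example values, not the notebook's `word_prob`):
```
import numpy as np

# Toy data from the example above, used only for illustration.
word_prob_demo = {"我们": 0.25, "学习": 0.15, "人工": 0.05, "智能": 0.1,
                  "人工智能": 0.2, "未来": 0.1, "是": 0.15}
sentence = "我们学习人工智能"

# Reverse adjacency list: graph[j] maps a start index i to the edge weight
# -log p(sentence[i:j]); an edge exists whenever sentence[i:j] is a dictionary word.
graph = [{} for _ in range(len(sentence) + 1)]
for i in range(len(sentence)):
    for j in range(i + 1, len(sentence) + 1):
        word = sentence[i:j]
        if word in word_prob_demo:
            graph[j][i] = -np.log(word_prob_demo[word])

print(graph)
```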
#### Step 2: Implement the Viterbi algorithm to find the best PATH, i.e. the best segmentation of the sentence
See the course material for the details of the algorithm.
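Concretely, if $dp(j)$ denotes the minimum total cost (sum of $-\log$ probabilities) of segmenting the first $j$ characters, a standard formulation of the dynamic program is:

$$dp(0) = 0, \qquad dp(j) = \min_{\substack{0 \le i < j \\ s[i:j] \,\in\, \text{dictionary}}} \big( dp(i) - \log p(s[i:j]) \big)$$

The best segmentation is recovered by backtracking through the indices $i$ that achieve each minimum.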
#### Step 3: Return the result
The output format is the same as required in PART 1.1.
```
import pandas as pd
import numpy as np
path = "./data/综合类中文词库.xlsx"
data_frame = pd.read_excel(path, header = None)
dic_word_list = data_frame[data_frame.columns[0]].tolist()
dic_words = dic_word_list # words read from the dictionary file
# The probabilities of individual words are listed below. To keep the problem simple, only a small subset is given.
# Words that are in the dictionary but not listed here all get the probability 0.00001, e.g. p("学院")=p("概率")=0.00001.
word_prob = {"北京":0.03,"的":0.08,"天":0.005,"气":0.005,"天气":0.06,"真":0.04,"好":0.05,"真好":0.04,"啊":0.01,"真好啊":0.02,
"今":0.01,"今天":0.07,"课程":0.06,"内容":0.06,"有":0.05,"很":0.03,"很有":0.04,"意思":0.06,"有意思":0.005,"课":0.01,
"程":0.005,"经常":0.08,"意见":0.08,"意":0.01,"见":0.005,"有意见":0.02,"分歧":0.04,"分":0.02, "歧":0.005}
for key, value in word_prob.items():
if key not in dic_words:
dic_words.append(key)
for item in dic_words:
word_prob.setdefault(item, 0.00001)
def set_graph_back_set_log(input_str, word_prob = word_prob):
    '''
    Build the reverse adjacency list of the word DAG.
    Probabilities are looked up first and converted to -log weights when an edge is inserted.
    '''
    str_len = len(input_str)
    master_list = [{} for _ in range(str_len + 1)] # initialise the master list with one dict per vertex
    window = 1 # initial window size
    start_position = 0 # window start position
    while(window <= str_len):
        end_position = start_position + window - 1 # window end position
        split_str = input_str[start_position:end_position] # substring currently being looked up
        log_value = word_prob.get(split_str) # probability of the current substring, if it is a dictionary word
        if(None != log_value): # the word exists, so insert the edge with weight -log(probability)
            master_list[end_position][start_position] = -np.log(log_value)
        if(str_len > end_position): start_position += 1 # slide the window one position to the right
        else:
            window += 1 # enlarge the window
            start_position = 0
    return master_list
def get_total_weight(list_input, index):
if(list_input[index][0] == 0 and list_input[index][1] == 1): return 0
else:
return list_input[index][0] + get_total_weight(list_input, list_input[index][1])
def calc_min_weight_path_back(master_list):
'''
逆邻接表求最短路径
'''
len_master_list = len(master_list)
# tmp_min_list = [[None, None]] * len_master_list # 初始化给定长度的数组保存最短路径
tmp_min_list = [[None] * 2 for row in range(len_master_list)] # 初始化给定长度的数组保存最短路径
tmp_min_list[0] = [0, 1] # 初始化第一个节点默认代表 最小值、来源
for index in range(1, len_master_list):
for key, value in master_list[index].items():
if(tmp_min_list[index][0] == None):
tmp_min_list[index][0] = value
tmp_min_list[index][1] = key
else:
                tmp_sum1 = get_total_weight(tmp_min_list, index)  # cost of the current best path to node index
                tmp_sum2 = get_total_weight(tmp_min_list, key) + value  # cost of the candidate path through node key
if(tmp_sum2 < tmp_sum1): # 新插入的值更小
tmp_min_list[index][0] = value
tmp_min_list[index][1] = key
return tmp_min_list
min_path = [1, 0, 0, 2, 3, 3, 5, 5, 5]
min_path[-1]
def get_split_by_viterbi(min_path_list, input_str):
min_path = [ i[1] for i in min_path_list]
tmp_list =[]
last_position = len(input_str)
position = min_path[-1]
while(position != 0):
tmp_list.append(input_str[position:last_position])
last_position = position
position = min_path[position]
tmp_list.append(input_str[position:last_position])
return tmp_list[::-1]
# 分数(10)
## TODO 请编写word_segment_viterbi函数来实现对输入字符串的分词
def word_segment_viterbi(input_str):
"""
1. 基于输入字符串,词典,以及给定的unigram概率来创建DAG(有向图)。
2. 编写维特比算法来寻找最优的PATH
3. 返回分词结果
input_str: 输入字符串 输入格式:“今天天气好”
best_segment: 最好的分词结果 输出格式:["今天","天气","好"]
"""
# 第一步:根据词典,输入的句子,以及给定的unigram概率来创建带权重的有向图(Directed Graph) 参考:课程内容
graph = set_graph_back_set_log(input_str)
# 第二步: 利用维特比算法来找出最好的PATH, 这个PATH是P(sentence)最大或者 -log P(sentence)最小的PATH。
min_path_list = calc_min_weight_path_back(graph)
# 第三步: 根据最好的PATH, 返回最好的切分
best_segment = get_split_by_viterbi(min_path_list, input_str)
return best_segment
print (word_segment_viterbi("北京的天气真好啊"))
print (word_segment_viterbi("今天的课程内容很有意思"))
print (word_segment_viterbi("经常有意见分歧"))
```
|
github_jupyter
|
import pandas as pd
import numpy as np
path = "./data/综合类中文词库.xlsx"
data_frame = pd.read_excel(path, header = None)
dic_word_list = data_frame[data_frame.columns[0]].tolist()
dic_words = dic_word_list # 保存词典库中读取的单词
# 以下是每一个单词出现的概率。为了问题的简化,我们只列出了一小部分单词的概率。 在这里没有出现的的单词但是出现在词典里的,统一把概率设置成为0.00001
# 比如 p("学院")=p("概率")=...0.00001
word_prob = {"北京":0.03,"的":0.08,"天":0.005,"气":0.005,"天气":0.06,"真":0.04,"好":0.05,"真好":0.04,"啊":0.01,"真好啊":0.02,
"今":0.01,"今天":0.07,"课程":0.06,"内容":0.06,"有":0.05,"很":0.03,"很有":0.04,"意思":0.06,"有意思":0.005,"课":0.01,
"程":0.005,"经常":0.08,"意见":0.08,"意":0.01,"见":0.005,"有意见":0.02,"分歧":0.04,"分":0.02, "歧":0.005}
for key, value in word_prob.items():
if key not in dic_words:
dic_words.append(key)
for item in dic_words:
word_prob.setdefault(item, 0.00001)
def set_graph_back_set_log(input_str, word_prob = word_prob):
'''
逆邻接表
未计算log值,找到后计算对应log值
'''
str_len = len(input_str)
master_list = [{} for _ in range(str_len + 1)] # 初始化含有定点的主列表
window = 1 # 初始化窗口大小
start_position = 0 # 窗口起始位置
while(window <= str_len):
end_position = start_position + window - 1 # 窗口结束为止
split_str = input_str[start_position:end_position] # 当前查找的字符串
log_value = word_prob.get(split_str)# 找到当前字符串对应的log值
if(None != log_value): # 值存在则插入字典
master_list[end_position][start_position] = -np.log(log_value)
if(str_len > end_position): start_position += 1# 向后一位进行检测
else:
window += 1 # 增大窗口大小
start_position = 0
return master_list
def get_total_weight(list_input, index):
if(list_input[index][0] == 0 and list_input[index][1] == 1): return 0
else:
return list_input[index][0] + get_total_weight(list_input, list_input[index][1])
def calc_min_weight_path_back(master_list):
'''
逆邻接表求最短路径
'''
len_master_list = len(master_list)
# tmp_min_list = [[None, None]] * len_master_list # 初始化给定长度的数组保存最短路径
tmp_min_list = [[None] * 2 for row in range(len_master_list)] # 初始化给定长度的数组保存最短路径
tmp_min_list[0] = [0, 1] # 初始化第一个节点默认代表 最小值、来源
for index in range(1, len_master_list):
for key, value in master_list[index].items():
if(tmp_min_list[index][0] == None):
tmp_min_list[index][0] = value
tmp_min_list[index][1] = key
else:
                tmp_sum1 = get_total_weight(tmp_min_list, index)  # cost of the current best path to node index
                tmp_sum2 = get_total_weight(tmp_min_list, key) + value  # cost of the candidate path through node key
if(tmp_sum2 < tmp_sum1): # 新插入的值更小
tmp_min_list[index][0] = value
tmp_min_list[index][1] = key
return tmp_min_list
min_path = [1, 0, 0, 2, 3, 3, 5, 5, 5]
min_path[-1]
def get_split_by_viterbi(min_path_list, input_str):
min_path = [ i[1] for i in min_path_list]
tmp_list =[]
last_position = len(input_str)
position = min_path[-1]
while(position != 0):
tmp_list.append(input_str[position:last_position])
last_position = position
position = min_path[position]
tmp_list.append(input_str[position:last_position])
return tmp_list[::-1]
# 分数(10)
## TODO 请编写word_segment_viterbi函数来实现对输入字符串的分词
def word_segment_viterbi(input_str):
"""
1. 基于输入字符串,词典,以及给定的unigram概率来创建DAG(有向图)。
2. 编写维特比算法来寻找最优的PATH
3. 返回分词结果
input_str: 输入字符串 输入格式:“今天天气好”
best_segment: 最好的分词结果 输出格式:["今天","天气","好"]
"""
# 第一步:根据词典,输入的句子,以及给定的unigram概率来创建带权重的有向图(Directed Graph) 参考:课程内容
graph = set_graph_back_set_log(input_str)
# 第二步: 利用维特比算法来找出最好的PATH, 这个PATH是P(sentence)最大或者 -log P(sentence)最小的PATH。
min_path_list = calc_min_weight_path_back(graph)
# 第三步: 根据最好的PATH, 返回最好的切分
best_segment = get_split_by_viterbi(min_path_list, input_str)
return best_segment
print (word_segment_viterbi("北京的天气真好啊"))
print (word_segment_viterbi("今天的课程内容很有意思"))
print (word_segment_viterbi("经常有意见分歧"))
| 0.138084 | 0.827131 |
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Author(s): Kevin P. Murphy ([email protected]) and Mahmoud Soliman ([email protected])
```
<a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/figures//chapter10_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Figure 10.1:<a name='10.1'></a> <a name='iris-logreg-2d'></a>
(a) Visualization of a 2d plane in a 3d space with surface normal $\mathbf w $ going through point $\mathbf x _0=(x_0,y_0,z_0)$. See text for details. (b) Visualization of optimal linear decision boundary induced by logistic regression on a 2-class, 2-feature version of the iris dataset.
Figure(s) generated by [iris_logreg.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/iris_logreg.py")
```
## Figure 10.2:<a name='10.2'></a> <a name='sigmoidPlot2d'></a>
Plots of $\sigma (w_1 x_1 + w_2 x_2)$. Here $\mathbf w = (w_1,w_2)$ defines the normal to the decision boundary. Points to the right of this have $\sigma (\mathbf w ^\top \mathbf x )>0.5$, and points to the left have $\sigma (\mathbf w ^\top \mathbf x ) < 0.5$. Adapted from Figure 39.3 of <a href='#MacKay03'>[Mac03]</a> .
Figure(s) generated by [sigmoid_2d_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/sigmoid_2d_plot.py)
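As a tiny numerical companion to this caption (my own snippet, not one of the pyprobml scripts): points on the side of the boundary that $\mathbf w$ points towards get $\sigma (\mathbf w ^\top \mathbf x )>0.5$, and points on the other side get values below 0.5.
```
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w = np.array([3.0, 1.0])       # normal to the decision boundary w^T x = 0
x_pos = np.array([1.0, 0.5])   # lies on the side w points towards
x_neg = np.array([-1.0, 0.5])  # lies on the opposite side
print(sigmoid(w @ x_pos))      # > 0.5
print(sigmoid(w @ x_neg))      # < 0.5
```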
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/sigmoid_2d_plot.py")
```
## Figure 10.3:<a name='10.3'></a> <a name='kernelTrickQuadratic'></a>
Illustration of how we can transform a quadratic decision boundary into a linear one by transforming the features from $\mathbf x =(x_1,x_2)$ to $\boldsymbol \phi (\mathbf x )=(x_1^2,x_2^2)$. Used with kind permission of Jean-Philippe Vert
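A small self-contained illustration of this idea (my own sketch, independent of the pyprobml scripts): labels defined by a circle are not linearly separable in $(x_1,x_2)$, but become separable by a straight line in the transformed features $(x_1^2,x_2^2)$.
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0)   # label: inside the unit circle

phi = X ** 2                              # feature map phi(x) = (x1^2, x2^2)
# In the transformed space the classes are separated by the line z1 + z2 = 1.
pred = phi.sum(axis=1) < 1.0
print("accuracy of the linear rule in feature space:", (pred == y).mean())
```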
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/kernelTrickQuadratic.png")
```
## Figure 10.4:<a name='10.4'></a> <a name='logregPoly'></a>
Polynomial feature expansion applied to a two-class, two-dimensional logistic regression problem. (a) Degree $K=1$. (b) Degree $K=2$. (c) Degree $K=4$. (d) Train and test error vs degree.
Figure(s) generated by [logreg_poly_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/logreg_poly_demo.py)
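To get a quick feel for the idea without running the full script, a sketch along the following lines can be used (it relies on scikit-learn and a synthetic dataset, which are my own choices; the pyprobml script may differ in the details). It applies a degree-$K$ polynomial expansion before logistic regression and reports training accuracy for a few degrees.
```
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
for degree in (1, 2, 4):
    clf = make_pipeline(PolynomialFeatures(degree), LogisticRegression(max_iter=1000))
    print(degree, clf.fit(X, y).score(X, y))
```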
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_poly_demo.py")
```
## Figure 10.5:<a name='10.5'></a> <a name='irisLossSurface'></a>
NLL loss surface for binary logistic regression applied to Iris dataset with 1 feature and 1 bias term. The goal is to minimize the function.
Figure(s) generated by [iris_logreg_loss_surface.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_logreg_loss_surface.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/iris_logreg_loss_surface.py")
```
## Figure 10.6:<a name='10.6'></a> <a name='logregPolyRidge'></a>
Weight decay with variance $C$ applied to two-class, two-dimensional logistic regression problem with a degree 4 polynomial. (a) $C=1$. (b) $C=316$. (c) $C=100,000$. (d) Train and test error vs $C$.
Figure(s) generated by [logreg_poly_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/logreg_poly_demo.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_poly_demo.py")
```
## Figure 10.7:<a name='10.7'></a> <a name='logregMultinom3class'></a>
Example of 3-class logistic regression with 2d inputs. (a) Original features. (b) Quadratic features.
Figure(s) generated by [logreg_multiclass_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/logreg_multiclass_demo.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_multiclass_demo.py")
```
## Figure 10.8:<a name='10.8'></a> <a name='labelTree'></a>
A simple example of a label hierarchy. Nodes within the same ellipse have a mutual exclusion relationship between them.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/labelTree.png")
```
## Figure 10.9:<a name='10.9'></a> <a name='hierSoftmax'></a>
A flat and hierarchical softmax model $p(y|x)$, where $x$ are the input features (context) and $y$ is the output label. From https://www.quora.com/What-is-hierarchical-softmax
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/softmaxFlat.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/softmaxHier.png")
```
## Figure 10.10:<a name='10.10'></a> <a name='termDoc'></a>
Example of a term-document matrix, where raw counts have been replaced by their TF-IDF values (see \cref sec:tfidf ). Darker cells are larger values. From https://bit.ly/2kByLQI . Used with kind permission of Christoph Carl Kling.
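A minimal way to build such a matrix (using scikit-learn's `TfidfVectorizer` on a toy corpus of my own; the figure itself comes from the linked source):
```
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["romeo and juliet", "juliet o happy dagger", "romeo died by dagger"]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)          # shape: (num_docs, num_terms)
print(vec.get_feature_names_out())
print(tfidf.toarray().round(2))
```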
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/LSAorig.png")
```
## Figure 10.11:<a name='10.11'></a> <a name='logregRobust'></a>
(a) Logistic regression on some data with outliers (denoted by x). Training points have been (vertically) jittered to avoid overlapping too much. Vertical line is the decision boundary, and its posterior credible interval. (b) Same as (a) but using robust model, with a mixture likelihood. Adapted from Figure 4.13 of <a href='#Martin2018'>[Mar18]</a> .
Figure(s) generated by [logreg_iris_bayes_robust_1d_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/logreg_iris_bayes_robust_1d_pymc3.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_iris_bayes_robust_1d_pymc3.py")
```
## Figure 10.12:<a name='10.12'></a> <a name='bitemperedLoss'></a>
(a) Illustration of logistic and tempered logistic loss with $t_1=0.8$. (b) Illustration of sigmoid and tempered sigmoid transfer function with $t_2=2.0$. From https://ai.googleblog.com/2019/08/bi-tempered-logistic-loss-for-training.html . Used with kind permission of Ehsan Amid.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/binary_loss.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binary_transfer_function.png")
```
## Figure 10.13:<a name='10.13'></a> <a name='bitempered'></a>
Illustration of standard and bi-tempered logistic regression on data with label noise. From https://ai.googleblog.com/2019/08/bi-tempered-logistic-loss-for-training.html . Used with kind permission of Ehsan Amid.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/bi_tempered_blog.png")
```
## Figure 10.14:<a name='10.14'></a> <a name='logregLaplaceGirolamiPost'></a>
(a) Illustration of the data. (b) Log-likelihood for a logistic regression model. The line is drawn from the origin in the direction of the MLE (which is at infinity). The numbers correspond to 4 points in parameter space, corresponding to the lines in (a). (c) Unnormalized log posterior (assuming vague spherical prior). (d) Laplace approximation to posterior. Adapted from a figure by Mark Girolami.
Figure(s) generated by [logregLaplaceGirolamiDemo.m](https://github.com/probml/pmtk3/blob/master/demos/logregLaplaceGirolamiDemo.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiData.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiNLL.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPost.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPostLaplace.png")
```
## Figure 10.15:<a name='10.15'></a> <a name='logregLaplaceDemoPred'></a>
Posterior predictive distribution for a logistic regression model in 2d. Top left: contours of $p(y=1|\mathbf x , \mathbf w _{\mathrm{map}})$. Top right: samples from the posterior predictive distribution. Bottom left: Averaging over these samples. Bottom right: moderated output (probit approximation). Adapted from a figure by Mark Girolami.
Figure(s) generated by [logregLaplaceGirolamiDemo.m](https://github.com/probml/pmtk3/blob/master/demos/logregLaplaceGirolamiDemo.m)
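The moderated (probit-approximation) output in the bottom-right panel is commonly summarised as $\mathbb{E}[\sigma(a)] \approx \sigma\big(\mu/\sqrt{1+\pi s^2/8}\big)$ when $a \sim \mathcal{N}(\mu, s^2)$. The snippet below compares this approximation against a Monte Carlo estimate (my own check, independent of the MATLAB demo).
```
import numpy as np

rng = np.random.default_rng(0)
mu, s = 1.5, 2.0                                    # posterior mean and std of a = w^T x (made-up numbers)
a_samples = rng.normal(mu, s, size=200_000)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

mc_estimate = sigmoid(a_samples).mean()             # Monte Carlo estimate of E[sigma(a)]
kappa = 1.0 / np.sqrt(1.0 + np.pi * s ** 2 / 8.0)   # probit "moderation" factor
print(mc_estimate, sigmoid(kappa * mu))             # the two values should be close
```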
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPlugin.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiSamples.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiMc.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiModerated.png")
```
## Figure 10.16:<a name='10.16'></a> <a name='logregIris2dBayesUnbalanced'></a>
Illustration of the posterior over the decision boundary for classifying iris flowers (setosa vs versicolor) using 2 input features. (a) 25 examples per class. Adapted from Figure 4.5 of <a href='#Martin2018'>[Mar18]</a> . (b) 5 examples of class 0, 45 examples of class 1. Adapted from Figure 4.8 of <a href='#Martin2018'>[Mar18]</a> .
Figure(s) generated by [logreg_iris_bayes_2d_pymc3.py](https://github.com/probml/pyprobml/blob/master/scripts/logreg_iris_bayes_2d_pymc3.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_iris_bayes_2d_pymc3.py")
```
## Figure 10.17:<a name='10.17'></a> <a name='sigmoidLowerBound'></a>
Quadratic lower bounds on the sigmoid (logistic) function. In solid red, we plot $\sigma (x)$ vs $x$. In dotted blue, we plot the lower bound $L(x, \boldsymbol \xi )$ vs $x$ for $\boldsymbol \xi =2.5$. (a) JJ bound. This is tight at $\boldsymbol \xi = \pm 2.5$. (b) Bohning bound (\cref sec:bohningBinary ). This is tight at $\boldsymbol \xi =2.5$.
Figure(s) generated by [sigmoidLowerBounds.m](https://github.com/probml/pmtk3/blob/master/demos/sigmoidLowerBounds.m)
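For reference, the JJ bound in panel (a) is usually written $\sigma(x) \ge \sigma(\xi)\exp\big((x-\xi)/2 - \lambda(\xi)(x^2-\xi^2)\big)$ with $\lambda(\xi) = \tanh(\xi/2)/(4\xi)$. The snippet below numerically checks that this is indeed a lower bound and that it is tight at $x=\pm\xi$ (my own sketch of the standard form of the bound, not code from the book).
```
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def jj_lower_bound(x, xi):
    lam = np.tanh(xi / 2.0) / (4.0 * xi)
    return sigmoid(xi) * np.exp((x - xi) / 2.0 - lam * (x ** 2 - xi ** 2))

xi = 2.5
x = np.linspace(-6, 6, 1001)
assert np.all(jj_lower_bound(x, xi) <= sigmoid(x) + 1e-12)   # it is a lower bound
print(jj_lower_bound(xi, xi), sigmoid(xi))                   # tight at x = +xi
print(jj_lower_bound(-xi, xi), sigmoid(-xi))                 # and at x = -xi
```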
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/sigmoidBoundJJ.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/sigmoidBoundB.png")
```
## Figure 10.18:<a name='10.18'></a> <a name='dynamicLogreg'></a>
A dynamic logistic regression model. $\mathbf w _t$ are the regression weights at time $t$, and $a_t = \mathbf w _t^\top \mathbf x _t$.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/dynamicLogregA.png")
```
## Figure 10.19:<a name='10.19'></a> <a name='ridgeLassoOLS'></a>
(a) Data for logistic regression question. (b) Plot of $ w _k$ vs amount of correlation $c_k$ for three different estimators.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregQ1b.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ridgeLassoOLS.png")
```
## References:
<a name='MacKay03'>[Mac03]</a> D. MacKay "Information Theory, Inference, and Learning Algorithms". (2003).
<a name='Martin2018'>[Mar18]</a> O. Martin "Bayesian analysis with Python". (2018).
|
github_jupyter
|
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Author(s): Kevin P. Murphy ([email protected]) and Mahmoud Soliman ([email protected])
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/iris_logreg.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/sigmoid_2d_plot.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/kernelTrickQuadratic.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_poly_demo.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/iris_logreg_loss_surface.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_poly_demo.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_multiclass_demo.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/labelTree.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/softmaxFlat.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/softmaxHier.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/LSAorig.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_iris_bayes_robust_1d_pymc3.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/binary_loss.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binary_transfer_function.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/bi_tempered_blog.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiData.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiNLL.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPost.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPostLaplace.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiPlugin.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiSamples.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiMc.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/logregLaplaceGirolamiModerated.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/logreg_iris_bayes_2d_pymc3.py")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/sigmoidBoundJJ.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/sigmoidBoundB.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/dynamicLogregA.png")
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/logregQ1b.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ridgeLassoOLS.png")
| 0.666171 | 0.913097 |
<a href="https://colab.research.google.com/github/simecek/ECCB2021/blob/main/notebooks/10_Integrated_Gradients_G4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Data
```
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, MaxPooling1D, Dropout, GlobalAveragePooling1D, Dense
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display, HTML
# get train dataset
!wget --quiet https://raw.githubusercontent.com/ML-Bioinfo-CEITEC/penguinn/master/Datasets/train_set_1_1.txt
nucleo_dic = {
"A": 0,
"C": 1,
"T": 2,
"G": 3,
"N": 4,
}
df_train = pd.read_csv("train_set_1_1.txt", sep='\t', names=['sequence', 'label'])
# translate text labels to numbers 0, 1
labels_train = np.array(list(map((lambda x: 1 if x == 'positive' else 0), list(df_train['label']))))
dataset_train = df_train['sequence'].tolist()
# numericalize using the dictionary
dataset_ordinal_train = [[nucleo_dic[letter] for letter in sequence] for sequence in dataset_train]
# translate number values to one-hot vectors
dataset_onehot_train = tf.one_hot(dataset_ordinal_train, depth=5)
# get test dataset
!wget --quiet https://raw.githubusercontent.com/ML-Bioinfo-CEITEC/penguinn/master/Datasets/test_set_1_1.txt
# preprocess the test set similarly
df_test = pd.read_csv("test_set_1_1.txt", sep='\t', names=['sequence', 'label'])
labels_test = np.array(list(map((lambda x: 1 if x == 'positive' else 0), list(df_test['label']))))
dataset_test = df_test['sequence'].tolist()
# we use the same nucleo_dic as on the example before
dataset_ordinal_test = [[nucleo_dic[letter] for letter in sequence] for sequence in dataset_test]
dataset_onehot_test = tf.one_hot(dataset_ordinal_test, depth=5)
```
## Model
We have adapted the model from our original [paper](https://www.frontiersin.org/articles/10.3389/fgene.2020.568546/full). Note that it is a slightly more complex model than the one we saw yesterday; a quick shape sanity check follows the code below.
```
model = Sequential([
Conv1D(32, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Conv1D(16, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Conv1D(4, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Dropout(0.3),
GlobalAveragePooling1D(),
Dense(1)])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.005),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy']
)
```
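A quick sanity check of the architecture (my own addition, not part of the original notebook): run the model on a dummy one-hot batch with the same shape as the training data and confirm it outputs one logit per sequence.
```
dummy = tf.zeros((2, 200, 5))   # batch of 2 sequences, 200 bp, 5 one-hot channels (A, C, T, G, N)
print(model(dummy).shape)       # expected: (2, 1) -- one logit per sequence
model.summary()
```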
## Training and saving the model
```
model.fit(
dataset_onehot_train,
labels_train,
batch_size=128,
epochs=5,
validation_split=0.3
)
model.save("cnn_3epochs.h5", save_format='h5')
model = tf.keras.models.load_model('cnn_3epochs.h5')
```
## Integrated Gradients
```
def generate_alphas(m_steps=50, method='riemann_trapezoidal'):
"""
Args:
m_steps(Tensor): A 0D tensor of an int corresponding to the number of linear
interpolation steps for computing an approximate integral. Default is 50.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
(m_steps,).
"""
m_steps_float = tf.cast(m_steps, float)
if method == 'riemann_trapezoidal':
alphas = tf.linspace(0.0, 1.0, m_steps+1)
elif method == 'riemann_left':
alphas = tf.linspace(0.0, 1.0 - (1.0 / m_steps_float), m_steps)
elif method == 'riemann_midpoint':
alphas = tf.linspace(1.0 / (2.0 * m_steps_float), 1.0 - 1.0 / (2.0 * m_steps_float), m_steps)
elif method == 'riemann_right':
alphas = tf.linspace(1.0 / m_steps_float, 1.0, m_steps)
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
return alphas
def generate_path_inputs(baseline, input, alphas):
"""
Generate interpolated 'images' along a linear path at alpha intervals between a baseline tensor
baseline: 2D, shape: (200, 4)
input: preprocessed sample, shape: (200, 4)
alphas: list of steps in interpolated image ,shape: (21)
return: shape (21, 200, 4)
"""
# Expand dimensions for vectorized computation of interpolations.
alphas_x = alphas[:, tf.newaxis, tf.newaxis]
baseline_x = tf.expand_dims(baseline, axis=0)
input_x = tf.expand_dims(input, axis=0)
delta = input_x - baseline_x
path_inputs = baseline_x + alphas_x * delta
return path_inputs
def compute_gradients(model, path_inputs):
"""
compute dependency of each field on whole result, compared to interpolated 'images'
:param model: trained model
:param path_inputs: interpolated tensors, shape: (21, 200, 4)
:return: shape: (21, 200, 4)
"""
with tf.GradientTape() as tape:
tape.watch(path_inputs)
predictions = model(path_inputs)
outputs = []
for envelope in predictions:
outputs.append(envelope[0])
outputs = tf.convert_to_tensor(outputs, dtype=tf.float32)
gradients = tape.gradient(outputs, path_inputs)
return gradients
def integral_approximation(gradients, method='riemann_trapezoidal'):
"""Compute numerical approximation of integral from gradients.
Args:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the shape
(img_height, img_width, 3).
"""
if method == 'riemann_trapezoidal':
grads = (gradients[:-1] + gradients[1:]) / tf.constant(2.0)
elif method == 'riemann_left':
grads = gradients
elif method == 'riemann_midpoint':
grads = gradients
elif method == 'riemann_right':
grads = gradients
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
# Average integration approximation.
integrated_gradients = tf.math.reduce_mean(grads, axis=0)
return integrated_gradients
def integrated_gradients(model, baseline, input, m_steps=50, method='riemann_trapezoidal',
batch_size=32):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): 2D, shape: (200, 4)
input(Tensor): preprocessed sample, shape: (200, 4)
m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
linear interpolation steps for computing an approximate integral.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
batch_size(Tensor): A 0D tensor of an integer corresponding to a batch
size for alpha to scale computation and prevent OOM errors. Note: needs to
        be tf.int64 and should be < m_steps. Default value is 32.
Returns:
integrated_gradients(Tensor): A 2D tensor of floats with the same
shape as the input tensor.
"""
# 1. Generate alphas.
alphas = generate_alphas(m_steps=m_steps,
method=method)
# Initialize TensorArray outside loop to collect gradients. Note: this data structure
gradient_batches = tf.TensorArray(tf.float32, size=m_steps + 1)
# Iterate alphas range and batch computation for speed, memory efficiency, and scaling to larger m_steps.
for alpha in tf.range(0, len(alphas), batch_size):
from_ = alpha
to = tf.minimum(from_ + batch_size, len(alphas))
alpha_batch = alphas[from_:to]
# 2. Generate interpolated inputs between baseline and input.
interpolated_path_input_batch = generate_path_inputs(baseline=baseline,
input=input,
alphas=alpha_batch)
# 3. Compute gradients between model outputs and interpolated inputs.
gradient_batch = compute_gradients(model=model,
path_inputs=interpolated_path_input_batch)
# Write batch indices and gradients to TensorArray.
gradient_batches = gradient_batches.scatter(tf.range(from_, to), gradient_batch)
# Stack path gradients together row-wise into single tensor.
total_gradients = gradient_batches.stack()
# 4. Integral approximation through averaging gradients.
avg_gradients = integral_approximation(gradients=total_gradients,
method=method)
# 5. Scale integrated gradients with respect to input.
integrated_gradients = (input - baseline) * avg_gradients
return integrated_gradients
def choose_validation_points(integrated_gradients):
"""
Args:
integrated_gradients(Tensor): A 2D tensor of floats with shape (200, 4).
Return: List of attributes for highlighting DNA string sequence
"""
attr = np.zeros(200)
for i in range(200):
for j in range(4):
if integrated_gradients[i][j].numpy() == 0:
continue
attr[i] = integrated_gradients[i][j].numpy()
return attr
def visualize_token_attrs(sequence, attrs):
"""
Visualize attributions for given set of tokens.
Args:
- tokens: An array of tokens
- attrs: An array of attributions, of same size as 'tokens',
with attrs[i] being the attribution to tokens[i]
Returns:
- visualization: HTML text with colorful representation of DNA sequence
build on model prediction
"""
def get_color(attr):
if attr > 0:
red = int(128 * attr) + 127
green = 128 - int(64 * attr)
blue = 128 - int(64 * attr)
else:
red = 128 + int(64 * attr)
green = 128 + int(64 * attr)
blue = int(-128 * attr) + 127
return red, green, blue
# normalize attributions for visualization.
bound = max(abs(max(attrs)), abs(min(attrs)))
attrs = attrs / bound
html_text = ""
for i, tok in enumerate(sequence):
r, g, b = get_color(attrs[i])
if abs(attrs[i]) > 0.5:
html_text += " <span style='color:rgb(%d,%d,%d);font-weight:bold'>%s</span>" % (r, g, b, tok)
else:
html_text += " <span style='color:rgb(%d,%d,%d)'>%s</span>" % (r, g, b, tok)
return html_text
seq = 'AAAGAAGAGACCAAGACGGAAGACCCAATCGGACCGGGAGGTCCGGGGAGACGTGTCGGGGATCGGGACTTGACTGTGCTTACCAAAGGACCTAACGGAGGGGTCCATAGGAGTCTTGCGGGACTCCCTGGCACTGGAGTAGTATCGACATAAGGGTCACGGACGTTCCATTTAGTGAGCCATTTATAAACCACTATCAA'
channel={
'A': 0,
'T': 1,
'C': 2,
'G': 3,
'N': 4
}
seq_onehot = tf.one_hot([channel[c] for c in seq], depth=5)
#seq_onehot = tf.convert_to_tensor(seq_onehot, dtype=tf.float32)[:,:4]
seq_onehot.shape
baseline = tf.zeros(shape=(200, 5))
ig_attribution = integrated_gradients(model, baseline, seq_onehot)
attrs = choose_validation_points(ig_attribution)
visualisation = visualize_token_attrs(seq, attrs)
HTML(visualisation)
```
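One additional sanity check that integrated gradients supports (not in the original notebook): by the completeness property of the method, the attributions should sum approximately to the difference between the model output at the input and at the all-zeros baseline.
```
# Completeness check: attributions should sum to f(input) - f(baseline) (approximately).
pred_input = model(tf.expand_dims(seq_onehot, 0))[0, 0]
pred_baseline = model(tf.expand_dims(baseline, 0))[0, 0]
print(float(tf.reduce_sum(ig_attribution)))
print(float(pred_input - pred_baseline))
```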
|
github_jupyter
|
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, MaxPooling1D, Dropout, GlobalAveragePooling1D, Dense
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display, HTML
# get train dataset
!wget --quiet https://raw.githubusercontent.com/ML-Bioinfo-CEITEC/penguinn/master/Datasets/train_set_1_1.txt
nucleo_dic = {
"A": 0,
"C": 1,
"T": 2,
"G": 3,
"N": 4,
}
df_train = pd.read_csv("train_set_1_1.txt", sep='\t', names=['sequence', 'label'])
# translate text labels to numbers 0, 1
labels_train = np.array(list(map((lambda x: 1 if x == 'positive' else 0), list(df_train['label']))))
dataset_train = df_train['sequence'].tolist()
# numericalize using the dictionary
dataset_ordinal_train = [[nucleo_dic[letter] for letter in sequence] for sequence in dataset_train]
# translate number values to one-hot vectors
dataset_onehot_train = tf.one_hot(dataset_ordinal_train, depth=5)
# get test dataset
!wget --quiet https://raw.githubusercontent.com/ML-Bioinfo-CEITEC/penguinn/master/Datasets/test_set_1_1.txt
# preprocess the test set similarly
df_test = pd.read_csv("test_set_1_1.txt", sep='\t', names=['sequence', 'label'])
labels_test = np.array(list(map((lambda x: 1 if x == 'positive' else 0), list(df_test['label']))))
dataset_test = df_test['sequence'].tolist()
# we use the same nucleo_dic as on the example before
dataset_ordinal_test = [[nucleo_dic[letter] for letter in sequence] for sequence in dataset_test]
dataset_onehot_test = tf.one_hot(dataset_ordinal_test, depth=5)
model = Sequential([
Conv1D(32, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Conv1D(16, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Conv1D(4, kernel_size=8, data_format='channels_last', activation='relu'),
BatchNormalization(),
MaxPooling1D(),
Dropout(0.3),
GlobalAveragePooling1D(),
Dense(1)])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.005),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy']
)
model.fit(
dataset_onehot_train,
labels_train,
batch_size=128,
epochs=5,
validation_split=0.3
)
model.save("cnn_3epochs.h5", save_format='h5')
model = tf.keras.models.load_model('cnn_3epochs.h5')
def generate_alphas(m_steps=50, method='riemann_trapezoidal'):
"""
Args:
m_steps(Tensor): A 0D tensor of an int corresponding to the number of linear
interpolation steps for computing an approximate integral. Default is 50.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
alphas(Tensor): A 1D tensor of uniformly spaced floats with the shape
(m_steps,).
"""
m_steps_float = tf.cast(m_steps, float)
if method == 'riemann_trapezoidal':
alphas = tf.linspace(0.0, 1.0, m_steps+1)
elif method == 'riemann_left':
alphas = tf.linspace(0.0, 1.0 - (1.0 / m_steps_float), m_steps)
elif method == 'riemann_midpoint':
alphas = tf.linspace(1.0 / (2.0 * m_steps_float), 1.0 - 1.0 / (2.0 * m_steps_float), m_steps)
elif method == 'riemann_right':
alphas = tf.linspace(1.0 / m_steps_float, 1.0, m_steps)
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
return alphas
def generate_path_inputs(baseline, input, alphas):
"""
Generate interpolated 'images' along a linear path at alpha intervals between a baseline tensor
baseline: 2D, shape: (200, 4)
input: preprocessed sample, shape: (200, 4)
alphas: list of steps in interpolated image ,shape: (21)
return: shape (21, 200, 4)
"""
# Expand dimensions for vectorized computation of interpolations.
alphas_x = alphas[:, tf.newaxis, tf.newaxis]
baseline_x = tf.expand_dims(baseline, axis=0)
input_x = tf.expand_dims(input, axis=0)
delta = input_x - baseline_x
path_inputs = baseline_x + alphas_x * delta
return path_inputs
def compute_gradients(model, path_inputs):
"""
compute dependency of each field on whole result, compared to interpolated 'images'
:param model: trained model
:param path_inputs: interpolated tensors, shape: (21, 200, 4)
:return: shape: (21, 200, 4)
"""
with tf.GradientTape() as tape:
tape.watch(path_inputs)
predictions = model(path_inputs)
outputs = []
for envelope in predictions:
outputs.append(envelope[0])
outputs = tf.convert_to_tensor(outputs, dtype=tf.float32)
gradients = tape.gradient(outputs, path_inputs)
return gradients
def integral_approximation(gradients, method='riemann_trapezoidal'):
"""Compute numerical approximation of integral from gradients.
Args:
gradients(Tensor): A 4D tensor of floats with the shape
(m_steps, img_height, img_width, 3).
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
Returns:
integrated_gradients(Tensor): A 3D tensor of floats with the shape
(img_height, img_width, 3).
"""
if method == 'riemann_trapezoidal':
grads = (gradients[:-1] + gradients[1:]) / tf.constant(2.0)
elif method == 'riemann_left':
grads = gradients
elif method == 'riemann_midpoint':
grads = gradients
elif method == 'riemann_right':
grads = gradients
else:
raise AssertionError("Provided Riemann approximation method is not valid.")
# Average integration approximation.
integrated_gradients = tf.math.reduce_mean(grads, axis=0)
return integrated_gradients
def integrated_gradients(model, baseline, input, m_steps=50, method='riemann_trapezoidal',
batch_size=32):
"""
Args:
model(keras.Model): A trained model to generate predictions and inspect.
baseline(Tensor): 2D, shape: (200, 4)
input(Tensor): preprocessed sample, shape: (200, 4)
m_steps(Tensor): A 0D tensor of an integer corresponding to the number of
linear interpolation steps for computing an approximate integral.
method(str): A string representing the integral approximation method. The
following methods are implemented:
- riemann_trapezoidal(default)
- riemann_left
- riemann_midpoint
- riemann_right
batch_size(Tensor): A 0D tensor of an integer corresponding to a batch
size for alpha to scale computation and prevent OOM errors. Note: needs to
        be tf.int64 and should be < m_steps. Default value is 32.
Returns:
integrated_gradients(Tensor): A 2D tensor of floats with the same
shape as the input tensor.
"""
# 1. Generate alphas.
alphas = generate_alphas(m_steps=m_steps,
method=method)
# Initialize TensorArray outside loop to collect gradients. Note: this data structure
gradient_batches = tf.TensorArray(tf.float32, size=m_steps + 1)
# Iterate alphas range and batch computation for speed, memory efficiency, and scaling to larger m_steps.
for alpha in tf.range(0, len(alphas), batch_size):
from_ = alpha
to = tf.minimum(from_ + batch_size, len(alphas))
alpha_batch = alphas[from_:to]
# 2. Generate interpolated inputs between baseline and input.
interpolated_path_input_batch = generate_path_inputs(baseline=baseline,
input=input,
alphas=alpha_batch)
# 3. Compute gradients between model outputs and interpolated inputs.
gradient_batch = compute_gradients(model=model,
path_inputs=interpolated_path_input_batch)
# Write batch indices and gradients to TensorArray.
gradient_batches = gradient_batches.scatter(tf.range(from_, to), gradient_batch)
# Stack path gradients together row-wise into single tensor.
total_gradients = gradient_batches.stack()
# 4. Integral approximation through averaging gradients.
avg_gradients = integral_approximation(gradients=total_gradients,
method=method)
# 5. Scale integrated gradients with respect to input.
integrated_gradients = (input - baseline) * avg_gradients
return integrated_gradients
def choose_validation_points(integrated_gradients):
"""
Args:
integrated_gradients(Tensor): A 2D tensor of floats with shape (200, 4).
Return: List of attributes for highlighting DNA string sequence
"""
attr = np.zeros(200)
for i in range(200):
for j in range(4):
if integrated_gradients[i][j].numpy() == 0:
continue
attr[i] = integrated_gradients[i][j].numpy()
return attr
def visualize_token_attrs(sequence, attrs):
"""
Visualize attributions for given set of tokens.
Args:
- tokens: An array of tokens
- attrs: An array of attributions, of same size as 'tokens',
with attrs[i] being the attribution to tokens[i]
Returns:
- visualization: HTML text with colorful representation of DNA sequence
build on model prediction
"""
def get_color(attr):
if attr > 0:
red = int(128 * attr) + 127
green = 128 - int(64 * attr)
blue = 128 - int(64 * attr)
else:
red = 128 + int(64 * attr)
green = 128 + int(64 * attr)
blue = int(-128 * attr) + 127
return red, green, blue
# normalize attributions for visualization.
bound = max(abs(max(attrs)), abs(min(attrs)))
attrs = attrs / bound
html_text = ""
for i, tok in enumerate(sequence):
r, g, b = get_color(attrs[i])
if abs(attrs[i]) > 0.5:
html_text += " <span style='color:rgb(%d,%d,%d);font-weight:bold'>%s</span>" % (r, g, b, tok)
else:
html_text += " <span style='color:rgb(%d,%d,%d)'>%s</span>" % (r, g, b, tok)
return html_text
seq = 'AAAGAAGAGACCAAGACGGAAGACCCAATCGGACCGGGAGGTCCGGGGAGACGTGTCGGGGATCGGGACTTGACTGTGCTTACCAAAGGACCTAACGGAGGGGTCCATAGGAGTCTTGCGGGACTCCCTGGCACTGGAGTAGTATCGACATAAGGGTCACGGACGTTCCATTTAGTGAGCCATTTATAAACCACTATCAA'
channel={
'A': 0,
'T': 1,
'C': 2,
'G': 3,
'N': 4
}
seq_onehot = tf.one_hot([channel[c] for c in seq], depth=5)
#seq_onehot = tf.convert_to_tensor(seq_onehot, dtype=tf.float32)[:,:4]
seq_onehot.shape
baseline = tf.zeros(shape=(200, 5))
ig_attribution = integrated_gradients(model, baseline, seq_onehot)
attrs = choose_validation_points(ig_attribution)
visualisation = visualize_token_attrs(seq, attrs)
HTML(visualisation)
| 0.918713 | 0.936285 |
```
import pandemic_simulator as ps
import random
from tf_agents.specs import BoundedArraySpec
import numpy as np
import base64
import IPython
import matplotlib.pyplot as plt
import os
import reverb
import tempfile
import tensorflow as tf
from tf_agents.agents.ddpg import critic_network
from tf_agents.agents.sac import sac_agent
from tf_agents.agents.sac import tanh_normal_projection_network
from tf_agents.environments import suite_pybullet
from tf_agents.metrics import py_metrics
from tf_agents.networks import actor_distribution_network
from tf_agents.policies import greedy_policy
from tf_agents.policies import py_tf_eager_policy
from tf_agents.policies import random_py_policy
from tf_agents.replay_buffers import reverb_replay_buffer
from tf_agents.replay_buffers import reverb_utils
from tf_agents.train import actor
from tf_agents.train import learner
from tf_agents.train import triggers
from tf_agents.train.utils import spec_utils
from tf_agents.train.utils import strategy_utils
from tf_agents.train.utils import train_utils
tempdir = tempfile.gettempdir()
env_name = "MinitaurBulletEnv-v0" # @param {type:"string"}
# Use "num_iterations = 1e6" for better results (2 hrs)
# 1e5 is just so this doesn't take too long (1 hr)
num_iterations = 100000 # @param {type:"integer"}
initial_collect_steps = 10000 # @param {type:"integer"}
collect_steps_per_iteration = 1 # @param {type:"integer"}
replay_buffer_capacity = 10000 # @param {type:"integer"}
batch_size = 256 # @param {type:"integer"}
critic_learning_rate = 3e-4 # @param {type:"number"}
actor_learning_rate = 3e-4 # @param {type:"number"}
alpha_learning_rate = 3e-4 # @param {type:"number"}
target_update_tau = 0.005 # @param {type:"number"}
target_update_period = 1 # @param {type:"number"}
gamma = 0.99 # @param {type:"number"}
reward_scale_factor = 1.0 # @param {type:"number"}
actor_fc_layer_params = (256, 256)
critic_joint_fc_layer_params = (256, 256)
log_interval = 5000 # @param {type:"integer"}
num_eval_episodes = 20 # @param {type:"integer"}
eval_interval = 10000 # @param {type:"integer"}
policy_save_interval = 5000 # @param {type:"integer"}
env = suite_pybullet.load(env_name)
env.reset()
print('Observation Spec:')
print(env.time_step_spec().observation)
print('Action Spec:')
print(env.action_spec())
ps.init_globals(seed=random.randint(0,1000))
sim_config = ps.sh.small_town_config
viz = ps.viz.GymViz.from_config(sim_config=sim_config)
collect_env = ps.env.PandemicGymEnv.from_config(name='collect', sim_config=sim_config, pandemic_regulations=ps.sh.austin_regulations,done_fn=ps.env.done.ORDone(done_fns=[ps.env.done.InfectionSummaryAboveThresholdDone(summary_type=ps.env.infection_model.InfectionSummary.CRITICAL,threshold=sim_config.max_hospital_capacity*3),ps.env.done.NoPandemicDone(num_days=30)]))
ps.init_globals(seed=random.randint(0,1000))#check if this affects both envs or just one
eval_env = ps.env.PandemicGymEnv.from_config(name='eval', sim_config=sim_config, pandemic_regulations=ps.sh.austin_regulations,done_fn=ps.env.done.ORDone(done_fns=[ps.env.done.InfectionSummaryAboveThresholdDone(summary_type=ps.env.infection_model.InfectionSummary.CRITICAL,threshold=sim_config.max_hospital_capacity*3),ps.env.done.NoPandemicDone(num_days=30)]))
use_gpu = True #@param {type:"boolean"}
strategy = strategy_utils.get_strategy(tpu=False, use_gpu=use_gpu)
a=np.ndarray((1),dtype='float32')
a[0]=0.0
c=np.ndarray(1,dtype='float32')
a
collect_env = suite_pybullet.load(env_name)
eval_env = suite_pybullet.load(env_name)
observation_spec, action_spec, time_step_spec = (spec_utils.get_tensor_specs(collect_env))
a=action_spec.minimum
a
c=np.array(0,dtype='float32')
c
c=np.array(0,dtype='float32')
d=np.ndarray(c)
observation_spec2 =BoundedArraySpec([13,],np.float32,minimum=0,maximum=1)
action_spec3=tf.TensorSpec(np.array([1],))
time_step_spec = BoundedArraySpec([13,],np.float32,minimum=0,maximum=1)
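# SAC uses a critic that scores (observation, action) pairs and a stochastic tanh-squashed Gaussian actor; both networks are built under the distribution strategy scope below.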
with strategy.scope():
critic_net = critic_network.CriticNetwork(
(observation_spec, action_spec),
observation_fc_layer_params=None,
action_fc_layer_params=None,
joint_fc_layer_params=critic_joint_fc_layer_params,
kernel_initializer='glorot_uniform',
last_kernel_initializer='glorot_uniform')
with strategy.scope():
actor_net = actor_distribution_network.ActorDistributionNetwork(
observation_spec,
action_spec3,
continuous_projection_net=(
tanh_normal_projection_network.TanhNormalProjectionNetwork))
```
# Computational Astrophysics
## Fundamentals of Visualization
---
## Eduard Larrañaga
Observatorio Astronómico Nacional\
Facultad de Ciencias\
Universidad Nacional de Colombia
---
### About this notebook
In this notebook we present some of the fundamentals of visualization using `python`.
---
### Simple Data Plots with `matplotlib.pyplot`
```
import numpy as np
from matplotlib import pyplot as plt
data = np.loadtxt('plotdata.txt', comments='#')
x = data[:,0]
y = data[:,1]
plt.plot(x,y)
plt.show()
plt.plot(x, y, label=r'first curve label')
plt.xlabel(r'$x$ axis label')
plt.ylabel(r'$y$ axis label')
plt.legend()
plt.show()
plt.plot(x, y, '--r', label=r'first curve label')
plt.xlabel(r'$x$ axis label')
plt.ylabel(r'$y$ axis label')
plt.legend()
plt.show()
plt.figure(figsize=(7,7))
plt.plot(x, y, '-', color='blue', linewidth=3, label=r'first curve label')
plt.xlabel(r'$x$ axis label')
plt.ylabel(r'$y$ axis label')
plt.legend()
plt.grid()
plt.show()
```
---
### Two or more curves in a plot
```
import numpy as np
from matplotlib import pyplot as plt
data = np.loadtxt('plotdata.txt', comments='#')
x = data[:,0]
y1 = data[:,1]
y2 = data[:,2]
fig, ax = plt.subplots()
ax.plot(x, y1, 'r')
ax.plot(x, y2, 'b')
plt.xlim(min(x), max(x))
plt.ylim(min(y1)*1.1, max(y1)*1.1)
ax.set(xlabel=r'$X$ label', ylabel=r'$Y$ label', title=' Title of the Plot')
ax.grid()
plt.show()
```
---
### Plotting a function
```
import numpy as np
from matplotlib import pyplot as plt
def f(t):
return t**2 *np.exp(-t**2)
def g(t):
return t*np.sin(2*t)
t = np.linspace(-3, 3, 100)
y1 = np.zeros(len(t))
for i in range(len(t)):
y1[i] = f(t[i])
y2 = np.zeros(len(t))
for i in range(len(t)):
y2[i] = g(t[i])
y3 = np.cos(t)
plt.figure()
plt.plot(t, y1, 'r', label=r'$f(t) = t^2 e^{-t^2}$')
plt.plot(t, y2, 'b', label=r'$g(t) = t \sin t $')
plt.plot(t, y3, '--g', label=r'$ \cos t$')
plt.xlabel('t')
plt.title('Three Curves')
plt.legend()
plt.grid()
plt.show()
```
---
### Subplots
```
import numpy as np
from matplotlib import pyplot as plt
def f(t):
return t**2 *np.exp(-t**2)
def g(t):
return t*np.sin(2*t)
t = np.linspace(-3, 3, 100)
y1 = f(t)
y2 = g(t)
plt.figure(figsize=(7,7))
plt.subplot(2, 1, 1)
plt.plot(t, y1, '-r', label=r'$f(t) = t^2 e^{-t^2}$')
plt.ylabel(r'$f(t)$')
plt.legend()
plt.subplot(2,1,2)
plt.plot(t, y2, '.b', label=r'$g(t) = t \sin 2t$')
plt.ylabel(r'$g(t)$')
plt.xlabel(r'$t$')
plt.legend()
plt.show()
import numpy as np
from matplotlib import pyplot as plt
def f(t):
return t**2 *np.exp(-t**2)
def g(t):
return t*np.sin(2*t)
t = np.linspace(-3, 3, 100)
y1 = f(t)
y2 = g(t)
fig, ax = plt.subplots(2, figsize=(7,7))
ax[0].plot(t, y1, '-r', label=r'$f(t) = t^2 e^{-t^2}$')
ax[0].set_ylabel(r'$f(t)$')
ax[0].legend()
ax[1].plot(t, y2, '.b', label=r'$g(t) = t \sin 2t$')
ax[1].set_ylabel(r'$g(t)$')
ax[1].set_xlabel(r'$t$')
ax[1].legend()
plt.show()
```
### Seaborn Setup
```
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
x = np.linspace(0, 10, 100)
y1 = np.tan(x)
plt.figure(figsize=(7,7))
plt.plot(x, y1, 'r', label=r'$f(x) = \tan x$')
#plt.plot(t, y2, 'b', label=r'$g(t) = t \sin t $')
#plt.plot(t, y3, '--g', label=r'$ \cos t$')
plt.xlabel(r'$x$')
plt.ylabel(r'$f(x)$')
plt.title('Three Curves')
plt.legend()
plt.show()
```
---
### An Important Plot!
```
import math
import numpy as np
from matplotlib import pyplot as plt
Y = np.arange(-4,4,.005)
X = np.zeros((0))
for y in Y:
X = np.append(X,abs(y/2)- 0.09137*y**2 + math.sqrt(1-(abs(abs(y)-2)-1)**2) -3)
Y1 = np.append(np.arange(-7,-3,.01), np.arange(3,7,.01))
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1, 3*math.sqrt(-(y/7)**2+1))
X = np.append(X,X1)
Y = np.append(Y, Y1)
Y1 = np.append(np.arange(-7.,-4,.01), np.arange(4,7.01,.01))
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1, -3*math.sqrt(-(y/7)**2+1))
X = np.append(X,X1)
Y = np.append(Y, Y1)
Y1 = np.append(np.arange(-1,-.8,.01), np.arange(.8, 1,.01))
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1, 9-8*abs(y))
X = np.append(X,X1)
Y = np.append(Y, Y1)
Y1 = np.arange(-.5,.5,.05)
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1,2)
X = np.append(X,X1)
Y = np.append(Y, Y1)
Y1 = np.append(np.arange(-2.9,-1,.01), np.arange(1, 2.9,.01))
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1, 1.5 - .5*abs(y) - 1.89736*(math.sqrt(3-y**2+2*abs(y))-2) )
X = np.append(X,X1)
Y = np.append(Y, Y1)
Y1 = np.append(np.arange(-.7,-.45,.01), np.arange(.45, .7,.01))
X1 = np.zeros((0))
for y in Y1:
X1 = np.append(X1, 3*abs(y)+.75)
X = np.append(X,X1)
Y = np.append(Y, Y1)
plt.plot(Y,X, 'y.')
ax = plt.gca()
ax.set_facecolor((0, 0, 0))
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.show()
```
```
import dgl.nn as dglnn
from dgl import from_networkx
import torch.nn as nn
import torch as th
import torch.nn.functional as F
import dgl.function as fn
import networkx as nx
import pandas as pd
import socket
import struct
import random
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import category_encoders as ce
from sklearn.decomposition import PCA
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
data = pd.read_csv('NF-BoT-IoT.csv')
data
data['IPV4_SRC_ADDR'] = data.IPV4_SRC_ADDR.apply(lambda x: socket.inet_ntoa(struct.pack('>I', random.randint(0xac100001, 0xac1f0001))))
data['IPV4_SRC_ADDR'] = data.IPV4_SRC_ADDR.apply(str)
data['L4_SRC_PORT'] = data.L4_SRC_PORT.apply(str)
data['IPV4_DST_ADDR'] = data.IPV4_DST_ADDR.apply(str)
data['L4_DST_PORT'] = data.L4_DST_PORT.apply(str)
data['IPV4_SRC_ADDR'] = data['IPV4_SRC_ADDR'] + ':' + data['L4_SRC_PORT']
data['IPV4_DST_ADDR'] = data['IPV4_DST_ADDR'] + ':' + data['L4_DST_PORT']
data.drop(columns=['L4_SRC_PORT','L4_DST_PORT'],inplace=True)
data
data.drop(columns=['Attack'],inplace = True)
data.rename(columns={"Label": "label"},inplace = True)
label = data.label
data.drop(columns=['label'],inplace = True)
scaler = StandardScaler()
data = pd.concat([data, label], axis=1)
data
X_train, X_test, y_train, y_test = train_test_split(
data, label, test_size=0.3, random_state=123,stratify= label)
encoder = ce.TargetEncoder(cols=['TCP_FLAGS','L7_PROTO','PROTOCOL'])
encoder.fit(X_train, y_train)
X_train = encoder.transform(X_train)
cols_to_norm = list(set(list(X_train.iloc[:, 2:].columns )) - set(list(['label'])) )
X_train[cols_to_norm] = scaler.fit_transform(X_train[cols_to_norm])
X_train
X_train['h'] = X_train[ cols_to_norm ].values.tolist()
G = nx.from_pandas_edgelist(X_train, "IPV4_SRC_ADDR", "IPV4_DST_ADDR", ['h','label'],create_using=nx.MultiGraph())
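# Each flow record becomes an edge between its source and destination endpoints (IP:port strings); the feature vector 'h' and the binary label are carried along as edge attributes.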
G = G.to_directed()
G = from_networkx(G,edge_attrs=['h','label'] )
# Eq1
G.ndata['h'] = th.ones(G.num_nodes(), G.edata['h'].shape[1])
G.edata['train_mask'] = th.ones(len(G.edata['h']), dtype=th.bool)
G.edata['train_mask']
def compute_accuracy(pred, labels):
return (pred.argmax(1) == labels).float().mean().item()
class SAGELayer(nn.Module):
def __init__(self, ndim_in, edims, ndim_out, activation):
super(SAGELayer, self).__init__()
### force the layer to output a fixed dimension
self.W_msg = nn.Linear(ndim_in + edims, ndim_out)
### apply weight
self.W_apply = nn.Linear(ndim_in + ndim_out, ndim_out)
self.activation = activation
def message_func(self, edges):
return {'m': self.W_msg(th.cat([edges.src['h'], edges.data['h']], 2))}
def forward(self, g_dgl, nfeats, efeats):
with g_dgl.local_scope():
g = g_dgl
g.ndata['h'] = nfeats
g.edata['h'] = efeats
# Eq4
g.update_all(self.message_func, fn.mean('m', 'h_neigh'))
# Eq5
g.ndata['h'] = F.relu(self.W_apply(th.cat([g.ndata['h'], g.ndata['h_neigh']], 2)))
return g.ndata['h']
class SAGE(nn.Module):
def __init__(self, ndim_in, ndim_out, edim, activation, dropout):
super(SAGE, self).__init__()
self.layers = nn.ModuleList()
self.layers.append(SAGELayer(ndim_in, edim, 128, activation))
self.layers.append(SAGELayer(128, edim, ndim_out, activation))
self.dropout = nn.Dropout(p=dropout)
def forward(self, g, nfeats, efeats):
for i, layer in enumerate(self.layers):
if i != 0:
nfeats = self.dropout(nfeats)
nfeats = layer(g, nfeats, efeats)
return nfeats.sum(1)
class MLPPredictor(nn.Module):
def __init__(self, in_features, out_classes):
super().__init__()
self.W = nn.Linear(in_features * 2, out_classes)
def apply_edges(self, edges):
h_u = edges.src['h']
h_v = edges.dst['h']
score = self.W(th.cat([h_u, h_v], 1))
return {'score': score}
def forward(self, graph, h):
with graph.local_scope():
graph.ndata['h'] = h
graph.apply_edges(self.apply_edges)
return graph.edata['score']
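# Reshape node and edge features to (N, 1, d): the SAGE layers concatenate along dim 2, so a middle singleton dimension is required.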
G.ndata['h'] = th.reshape(G.ndata['h'], (G.ndata['h'].shape[0], 1,G.ndata['h'].shape[1]))
G.edata['h'] = th.reshape(G.edata['h'], (G.edata['h'].shape[0], 1,G.edata['h'].shape[1]))
class Model(nn.Module):
def __init__(self, ndim_in, ndim_out, edim, activation, dropout):
super().__init__()
self.gnn = SAGE(ndim_in, ndim_out, edim, activation, dropout)
self.pred = MLPPredictor(ndim_out, 2)
def forward(self, g, nfeats, efeats):
h = self.gnn(g, nfeats, efeats)
return self.pred(g, h)
from sklearn.utils import class_weight
class_weights = class_weight.compute_class_weight(class_weight='balanced',
                                                  classes=np.unique(G.edata['label'].cpu().numpy()),
                                                  y=G.edata['label'].cpu().numpy())
class_weights
class_weights = th.FloatTensor(class_weights).cuda()
criterion = nn.CrossEntropyLoss(weight=class_weights)
G = G.to('cuda:0')
G.device
G.ndata['h'].device
G.edata['h'].device
node_features = G.ndata['h']
edge_features = G.edata['h']
edge_label = G.edata['label']
train_mask = G.edata['train_mask']
model = Model(G.ndata['h'].shape[2], 128, G.ndata['h'].shape[2], F.relu, 0.2).cuda()
opt = th.optim.Adam(model.parameters())
for epoch in range(1,5500):
pred = model(G, node_features,edge_features).cuda()
loss = criterion(pred[train_mask] ,edge_label[train_mask])
opt.zero_grad()
loss.backward()
opt.step()
if epoch % 100 == 0:
print('Training acc:', compute_accuracy(pred[train_mask], edge_label[train_mask]))
X_test = encoder.transform(X_test)
X_test[cols_to_norm] = scaler.transform(X_test[cols_to_norm])
X_test
X_test['h'] = X_test[ cols_to_norm ].values.tolist()
G_test = nx.from_pandas_edgelist(X_test, "IPV4_SRC_ADDR", "IPV4_DST_ADDR", ['h','label'],create_using=nx.MultiGraph())
G_test = G_test.to_directed()
G_test = from_networkx(G_test,edge_attrs=['h','label'] )
actual = G_test.edata.pop('label')
G_test.ndata['feature'] = th.ones(G_test.num_nodes(), G.ndata['h'].shape[2])
G_test.ndata['feature'] = th.reshape(G_test.ndata['feature'], (G_test.ndata['feature'].shape[0], 1, G_test.ndata['feature'].shape[1]))
G_test.edata['h'] = th.reshape(G_test.edata['h'], (G_test.edata['h'].shape[0], 1, G_test.edata['h'].shape[1]))
G_test = G_test.to('cuda:0')
import timeit
start_time = timeit.default_timer()
node_features_test = G_test.ndata['feature']
edge_features_test = G_test.edata['h']
test_pred = model(G_test, node_features_test, edge_features_test).cuda()
elapsed = timeit.default_timer() - start_time
print(str(elapsed) + ' seconds')
test_pred = test_pred.argmax(1)
test_pred = th.Tensor.cpu(test_pred).detach().numpy()
actual = ["Normal" if i == 0 else "Attack" for i in actual]
test_pred = ["Normal" if i == 0 else "Attack" for i in test_pred]
import numpy as np
def plot_confusion_matrix(cm,
target_names,
title='Confusion matrix',
cmap=None,
normalize=True):
import matplotlib.pyplot as plt
import numpy as np
import itertools
accuracy = np.trace(cm) / float(np.sum(cm))
misclass = 1 - accuracy
if cmap is None:
cmap = plt.get_cmap('Blues')
plt.figure(figsize=(12, 12))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
if target_names is not None:
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 1.5 if normalize else cm.max() / 2
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, "{:0.4f}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, "{:,}".format(cm[i, j]),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label\naccuracy={:0.4f}; misclass={:0.4f}'.format(accuracy, misclass))
plt.show()
from sklearn.metrics import confusion_matrix
plot_confusion_matrix(cm = confusion_matrix(actual, test_pred),
normalize = False,
target_names = np.unique(actual),
title = "Confusion Matrix")
```
# HW04: Sentiment Analysis
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.model import fit
from fastai.dataset import *
import torchtext
from torchtext import vocab, data
from torchtext.datasets import language_modeling
from fastai.rnn_reg import *
from fastai.rnn_train import *
from fastai.nlp import *
from fastai.lm_rnn import *
import dill as pickle
bs,bptt = 32,35
```
## Language modeling
### Data
```
PATH='./data/sentiment/'
df_imdb = pd.read_csv(f'{PATH}imdb_labelled.txt', sep='\t', header=None, names=['text', 'label'])
df_amzn = pd.read_csv(f'{PATH}amazon_cells_labelled.txt', sep='\t', header=None, names=['text', 'label'])
df_yelp = pd.read_csv(f'{PATH}yelp_labelled.txt', sep='\t', header=None, names=['text', 'label'])
df_all = pd.concat([df_imdb, df_amzn, df_yelp])
n=len(df_all);
print(n)
df_imdb.head()
os.makedirs(f'{PATH}trn/yes', exist_ok=True)
os.makedirs(f'{PATH}val/yes', exist_ok=True)
os.makedirs(f'{PATH}trn/no', exist_ok=True)
os.makedirs(f'{PATH}val/no', exist_ok=True)
os.makedirs(f'{PATH}all/trn', exist_ok=True)
os.makedirs(f'{PATH}all/val', exist_ok=True)
os.makedirs(f'{PATH}models', exist_ok=True)
for (i,(_,r)) in enumerate(df_all.iterrows()):
dset = 'trn' if random.random()>0.1 else 'val'
open(f'{PATH}all/{dset}/{i}.txt', 'w').write(r['text'])
for (i,(_,r)) in enumerate(df_imdb.iterrows()):
lbl = 'yes' if r.label else 'no'
dset = 'trn' if random.random()>0.1 else 'val'
open(f'{PATH}{dset}/{lbl}/{i}.txt', 'w').write(r['text'])
import spacy
from spacy.symbols import ORTH
my_tok = spacy.load('en')
# my_tok.tokenizer.add_special_case('<SUMM>', [{ORTH: '<SUMM>'}])
def my_spacy_tok(x): return [tok.text for tok in my_tok.tokenizer(x)]
TEXT = data.Field(lower=True, tokenize=my_spacy_tok)
FILES = dict(train='trn', validation='val', test='val')
md = LanguageModelData.from_text_files(f'{PATH}all/', TEXT, **FILES, bs=bs, bptt=bptt, min_freq=10)
pickle.dump(TEXT, open(f'{PATH}models/TEXT.pkl','wb'))
len(md.trn_dl), md.nt, len(md.trn_ds), len(md.trn_ds[0].text)
TEXT.vocab.itos[:12]
' '.join(md.trn_ds[0].text[:150])
```
### Train
```
em_sz = 200
nh = 500
nl = 3
opt_fn = partial(optim.Adam, betas=(0.7, 0.99))
learner = md.get_model(opt_fn, em_sz, nh, nl,
dropout=0.05, dropouth=0.1, dropouti=0.05, dropoute=0.02, wdrop=0.2)
# dropout=0.4, dropouth=0.3, dropouti=0.65, dropoute=0.1, wdrop=0.5
# dropouti=0.05, dropout=0.05, wdrop=0.1, dropoute=0.02, dropouth=0.05)
learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
learner.clip=0.3
learner.fit(3e-3, 1, wds=1e-6)
learner.fit(3e-3, 3, wds=1e-6, cycle_len=1, cycle_mult=2)
learner.save_encoder('adam2_enc')
```
Keep running this block until you get good training and validation losses without overfitting.
```
learner.fit(3e-3, 1, wds=1e-6, cycle_len=5, cycle_save_name='adam3_10')
learner.save_encoder('adam3_10_enc')
```
### Test
```
def proc_str(s): return TEXT.preprocess(TEXT.tokenize(s))
def num_str(s): return TEXT.numericalize([proc_str(s)])
m=learner.model
s="""very, very slow-moving, aimless movie"""
def sample_model(m, s, l=50):
t = num_str(s)
m[0].bs=1
m.eval()
m.reset()
res,*_ = m(t)
print('...', end='')
for i in range(l):
n=res[-1].topk(2)[1]
n = n[1] if n.data[0]==0 else n[0]
word = TEXT.vocab.itos[n.data[0]]
print(word, end=' ')
if word=='<eos>': break
res,*_ = m(n[0].unsqueeze(0))
m[0].bs=bs
sample_model(m,s)
```
### Sentiment
```
TEXT = pickle.load(open(f'{PATH}models/TEXT.pkl','rb'))
class ReviewDataset(torchtext.data.Dataset):
def __init__(self, path, text_field, label_field, **kwargs):
fields = [('text', text_field), ('label', label_field)]
examples = []
for label in ['yes', 'no']:
for fname in glob(os.path.join(path, label, '*.txt')):
with open(fname, 'r') as f: text = f.readline()
examples.append(data.Example.fromlist([text, label], fields))
super().__init__(examples, fields, **kwargs)
@staticmethod
def sort_key(ex): return len(ex.text)
@classmethod
def splits(cls, text_field, label_field, root='.data',
train='train', test='test', **kwargs):
return super().splits(
root, text_field=text_field, label_field=label_field,
train=train, validation=None, test=test, **kwargs)
REV_LABEL = data.Field(sequential=False)
splits = ReviewDataset.splits(TEXT, REV_LABEL, PATH, train='trn', test='val')
md2 = TextData.from_splits(PATH, splits, bs)
# dropout=0.3, dropouti=0.4, wdrop=0.3, dropoute=0.05, dropouth=0.2)
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
def prec_at_6(preds,targs):
precision, recall, _ = precision_recall_curve(targs==2, preds[:,2])
print(recall[precision>=0.6][0])
return recall[precision>=0.6][0]
# dropout=0.4, dropouth=0.3, dropouti=0.65, dropoute=0.1, wdrop=0.5
m3 = md2.get_model(opt_fn, 1500, bptt, emb_sz=em_sz, n_hid=nh, n_layers=nl,
dropout=0.1, dropouti=0.65, wdrop=0.5, dropoute=0.1, dropouth=0.3)
m3.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
m3.clip=25.
m3.load_encoder(f'adam3_10_enc')
lrs=np.array([1e-4,1e-3,1e-2])
m3.freeze_to(-1)
m3.fit(lrs/2, 1, metrics=[accuracy])
m3.unfreeze()
m3.fit(lrs, 1, metrics=[accuracy], cycle_len=1)
m3.fit(lrs, 2, metrics=[accuracy], cycle_len=4, cycle_save_name='imdb2')
prec_at_6(*m3.predict_with_targs())
m3.fit(lrs, 4, metrics=[accuracy], cycle_len=2, cycle_save_name='imdb2')
prec_at_6(*m3.predict_with_targs())
```
# Exploring Raw Data with _ctapipe_
Here are some simple examples of going through and inspecting the raw data, using only the basic pieces that are implemented right now.
```
# some setup (need to import the things we will use later)
from ctapipe.utils.datasets import get_path
from ctapipe.io.hessio import hessio_event_source
from ctapipe import visualization, io
from matplotlib import pyplot as plt
from astropy import units as u
%matplotlib inline
```
To read HESSIO-format data, you must first install the `pyhessioxxx` module separately (currently it is not included as part of ctapipe) and make sure it is on your `PYTHONPATH`.
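If the module lives in a local checkout that Python does not search by default, one minimal way to expose it (a sketch only; the checkout path below is hypothetical, so adjust it to your installation) is to extend `sys.path` before importing:
```
import os, sys
# Hypothetical location of the pyhessioxxx checkout; replace with your own path.
sys.path.append(os.path.expanduser("~/checkouts/pyhessioxxx"))
```
Equivalently, you can export `PYTHONPATH` in the shell before starting the notebook. With that in place, the following line will work: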
```
source = hessio_event_source(get_path("gamma_test.simtel.gz"), max_events=100)
```
## Looking at what is in the event
```
event = next(source) # get next event
print(event)
print(event.dl0)
```
The event is just a class with a bunch of data items in it. You can see a more compact representation via:
```
print(repr(event))
print(repr(event.dl0))
print(event.dl0.tels_with_data)
```
Note that this event has two telescopes in it: 38 and 40... Let's try the next one:
```
event = next(source) # get the next event
print(event.dl0.tels_with_data)
```
Now we have a larger event with many telescopes... Let's look at the data from **CT24**:
```
teldata = event.dl0.tel[24]
print(teldata)
teldata
```
Again, `event.tel_data` contains a data structure for the telescope data, with some fields like `adc_samples`.
Let's make a 2D plot of the sample data (sample vs. pixel) so we can see whether the event shows up:
```
plt.pcolormesh(teldata.adc_samples[0]) # note the [0] is for channel 0
plt.xlabel("sample number")
plt.ylabel("Pixel_id")
```
Let's zoom in to see if we can identify the pixels that have the Cherenkov signal in them
```
plt.pcolormesh(teldata.adc_samples[0])
plt.ylim(260,290)
plt.xlabel("sample number")
plt.ylabel("pixel_id")
print("adc_samples[0] is an array of shape (N_pix,N_slice) =",teldata.adc_samples[0].shape)
```
Now we can really see that some pixels have a signal in them!
Let's look at a 1D plot of pixel 270 in channel 0 and see the signal:
```
trace = teldata.adc_samples[0][270]
plt.plot(trace)
```
Great! It looks like a *standard Cherenkov signal*!
Let's take a look at several traces to see if the peaks are aligned:
```
for pix_id in [269,270,271,272,273,274,275,276]:
plt.plot(teldata.adc_samples[0][pix_id], label="pix {}".format(pix_id))
plt.legend()
```
Let's define the integration windows first:
By eye, they seem to be reasonable: samples 8 to 13 for the signal, and 20 to 29 for the pedestal.
```
for pix_id in [269,270,271,272,273,274,275,276]:
plt.plot(teldata.adc_samples[0][pix_id],'+-')
plt.fill_betweenx([0,1200],20,29,color='red',alpha=0.3)
plt.fill_betweenx([0,1200],8,13,color='green',alpha=0.3)
```
## Very simplistic trace analysis
Now let's, for example, calculate a signal and a background in the fixed windows we defined for this single event:
```
data = teldata.adc_samples[0]
peds = data[:, 20:29].mean(axis=1)  # mean over the pedestal window for all pixels
sums = data[:, 8:13].sum(axis=1)/(13-8) # simple sum integration
phist = plt.hist(peds, bins=40, range=[0,150])
plt.title("Pedestal Distribution for a single event")
```
Let's now take a look at the pedestal-subtracted sums and a pedestal-subtracted signal:
```
plt.plot(sums - peds)
# we can also subtract the pedestals from the traces themselves, which would be needed to compare peaks properly
for ii in range(270,280):
plt.plot(data[ii] - peds[ii])
```
## Camera displays
Better yet, let's do it in 2D! At this point the ArrayConfig data model is not implemented, so there is no good way to load all the camera definitions (right now this is hacked into the `hessio_event_source`, which will at least read the pixel positions from the file).
```
pix_x, pix_y= event.meta.pixel_pos[24]
camgeom = io.CameraGeometry.guess(pix_x*u.m, pix_y*u.m) # just guess the geometry from the pix pos
title="CT24, run {} event {} ped-sub".format(event.dl0.run_id,event.dl0.event_id)
disp = visualization.CameraDisplay(camgeom,title=title)
disp.image = sums - peds
disp.cmap = plt.cm.RdBu_r
disp.add_colorbar()
disp.set_limits_percent(95) # autoscale
```
It looks like a nice signal! We have plotted our pedestal-subtracted trace integral, and see the shower clearly!
Let's look at all telescopes:
```
for tel in event.dl0.tels_with_data:
plt.figure()
pix_x, pix_y= event.meta.pixel_pos[tel]
camgeom = io.CameraGeometry.guess(pix_x*u.m, pix_y*u.m) # just guess the geometry from the pix pos
title="CT{}, run {} event {}".format(tel,event.dl0.run_id,event.dl0.event_id)
disp = visualization.CameraDisplay(camgeom,title=title)
disp.image = event.dl0.tel[tel].adc_sums[0]
disp.cmap = plt.cm.RdBu_r
disp.add_colorbar()
disp.set_limits_percent(95)
```
# Some signal processing...
Let's try to detect the peak using the `scipy.signal` package:
http://docs.scipy.org/doc/scipy/reference/signal.html
```
from scipy import signal
import numpy as np
pix_ids = np.arange(len(data))
has_signal = sums > 300
widths = np.array([8,]) # peak widths to search for (let's fix it at 8 samples, about the width of the peak)
peaks = [signal.find_peaks_cwt(trace,widths) for trace in data[has_signal] ]
for p,s in zip(pix_ids[has_signal],peaks):
print("pix{} has peaks at sample {}".format(p,s))
plt.plot(data[p])
plt.scatter(np.array(s),data[p,s])
```
Clearly the signal needs to be filtered first, or an appropriate wavelet used, but the idea is nice.
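As a rough illustration of that idea (a sketch, not part of the original analysis), one could smooth each trace with a simple moving average before running the peak finder, which suppresses the single-sample noise that confuses it:
```
# Boxcar-smooth the traces before peak finding (assumes `data`, `has_signal`
# and `widths` from the cells above).
kernel = np.ones(4) / 4.0   # 4-sample moving average
smoothed = [np.convolve(trace, kernel, mode='same') for trace in data[has_signal]]
peaks_smooth = [signal.find_peaks_cwt(trace, widths) for trace in smoothed]
```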
```
%matplotlib inline
```
# L1 Penalty and Sparsity in Logistic Regression
Comparison of the sparsity (percentage of zero coefficients) of solutions when
L1, L2 and Elastic-Net penalties are used for different values of C. We can see
that large values of C give more freedom to the model. Conversely, smaller
values of C constrain the model more. In the L1 penalty case, this leads to
sparser solutions. As expected, the Elastic-Net penalty sparsity is between
that of L1 and L2.
We classify 8x8 images of digits into two classes: 0-4 against 5-9.
The visualization shows coefficients of the models for varying C.
```
print(__doc__)
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Andreas Mueller <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
X, y = datasets.load_digits(return_X_y=True)
X = StandardScaler().fit_transform(X)
# classify small against large digits
y = (y > 4).astype(int)  # np.int is deprecated in recent NumPy; the builtin int behaves the same here
l1_ratio = 0.5 # L1 weight in the Elastic-Net regularization
fig, axes = plt.subplots(3, 3)
# Set regularization parameter
for i, (C, axes_row) in enumerate(zip((1, 0.1, 0.01), axes)):
# turn down tolerance for short training time
clf_l1_LR = LogisticRegression(C=C, penalty='l1', tol=0.01, solver='saga')
clf_l2_LR = LogisticRegression(C=C, penalty='l2', tol=0.01, solver='saga')
clf_en_LR = LogisticRegression(C=C, penalty='elasticnet', solver='saga',
l1_ratio=l1_ratio, tol=0.01)
clf_l1_LR.fit(X, y)
clf_l2_LR.fit(X, y)
clf_en_LR.fit(X, y)
coef_l1_LR = clf_l1_LR.coef_.ravel()
coef_l2_LR = clf_l2_LR.coef_.ravel()
coef_en_LR = clf_en_LR.coef_.ravel()
# coef_l1_LR contains zeros due to the
# L1 sparsity inducing norm
sparsity_l1_LR = np.mean(coef_l1_LR == 0) * 100
sparsity_l2_LR = np.mean(coef_l2_LR == 0) * 100
sparsity_en_LR = np.mean(coef_en_LR == 0) * 100
print("C=%.2f" % C)
print("{:<40} {:.2f}%".format("Sparsity with L1 penalty:", sparsity_l1_LR))
print("{:<40} {:.2f}%".format("Sparsity with Elastic-Net penalty:",
sparsity_en_LR))
print("{:<40} {:.2f}%".format("Sparsity with L2 penalty:", sparsity_l2_LR))
print("{:<40} {:.2f}".format("Score with L1 penalty:",
clf_l1_LR.score(X, y)))
print("{:<40} {:.2f}".format("Score with Elastic-Net penalty:",
clf_en_LR.score(X, y)))
print("{:<40} {:.2f}".format("Score with L2 penalty:",
clf_l2_LR.score(X, y)))
if i == 0:
axes_row[0].set_title("L1 penalty")
axes_row[1].set_title("Elastic-Net\nl1_ratio = %s" % l1_ratio)
axes_row[2].set_title("L2 penalty")
for ax, coefs in zip(axes_row, [coef_l1_LR, coef_en_LR, coef_l2_LR]):
ax.imshow(np.abs(coefs.reshape(8, 8)), interpolation='nearest',
cmap='binary', vmax=1, vmin=0)
ax.set_xticks(())
ax.set_yticks(())
axes_row[0].set_ylabel('C = %s' % C)
plt.show()
```
# Machine Learning - Part 1
_Machine learning_ (ML) is a subfield of artificial intelligence whose goal is to allow the computer to _learn from data_ without being explicitly programmed. Broadly speaking, in machine learning we build algorithms that read data, learn from their "experience", and infer things from the acquired knowledge. This field has been of great value to many sectors because it can turn apparently disconnected data into information that is crucial for decision making, by recognizing meaningful patterns.
## Modeling and the subdivision of the field
The fundamental problems of ML can generally be explained through _models_. A mathematical (or probabilistic) model is nothing more than a relationship between variables. The two major classes of ML problems are the following.
- **Supervised learning**, applicable to situations in which we want to predict values. In this case, the algorithms learn from a labeled training set (_labels_ or _exemplars_) and look for _generalizations_ to all possible input data. In supervised problems it is necessary to know which data provides the "fundamental truth" against which the others can be compared; this is popularly called the _ground truth_. Examples of algorithms in this class are _logistic regression_, _support vector machines_, and _random forest_.
- **Unsupervised learning**, applicable to situations in which we want to explore the data in order to explain it. In this case, the algorithms learn from an unlabeled training set and look for _explanations_ based on some statistical, geometric, or similarity criterion. Examples of algorithms in this class are _k-means clustering_ and _kernel density estimation_.
There is also a third class that we will not study in this course, **reinforcement learning**, whose algorithms learn from reinforcement in order to improve the quality of an answer by exploring the solution space iteratively.
As {numref}`overview-ml` summarizes, supervised learning problems can be of:
- _classification_, if the desired answer is discrete, that is, if there are only a few possible values to assign (e.g. classifying whether a family has low, medium, or high income based on economic data);
- _regression_, if the desired answer is continuous, that is, if it admits varying values (e.g. determining the income of a family's members based on their professions).
On the other hand, unsupervised learning problems can be of:
- _clustering_, if the desired answer must be organized into several groups. Clustering is similar to the classification problem, except that the number of classes is not known _a priori_;
- _density estimation_, if the desired answer is the explanation of the underlying processes responsible for the distribution of the data.
```{figure} ../figs/13/visao-geral-ml.png
---
width: 600px
name: overview-ml
---
Main classes and fundamental problems of _machine learning_. Source: adapted from Chah.
```
## Case study: classifying bank loans
The problem we will study consists of predicting whether a person's loan request will be partially or fully approved by a lender. The lender's available database covers the years 2007 to 2011.
Approval of the request is based on a risk analysis that uses various pieces of information, such as the person's annual income, indebtedness, defaults, the interest rate of the loan, etc.
Mathematically, the person's request is considered successful if
$$\alpha = \frac{E - F}{E} \ge 0.95,$$
where $E$ is the requested loan amount and $F$ is the financing released. The binary classifier can be written as the function
$$h({\bf X}): \mathbb{M}_{n \, \times \, d} \to \mathbb{K},$$
where $\mathbb{K} = \{+1,-1\}$ and ${\bf X}$ is a matrix of $n$ samples and $d$ _features_ belonging to the abstract set $\mathbb{M}_{n \, \times \, d}$.
```{note}
In a classification problem, if the answer admits only two values (two classes), such as "yes" and "no", the classifier is said to be **binary**. If more values are admissible, the classifier is said to be **multiclass**.
```
```
import pickle
import numpy as np
import matplotlib.pyplot as plt
```
Let's read the database.
```
import pickle
f = open('../database/dataset_small.pkl','rb')
# encoding 'latin1' is required
(x,y) = pickle.load(f,encoding='latin1')
```
Here, `x` is our feature matrix.
```
# 4140 samples
# 15 features
x.shape
```
`y` is the vector of _labels_.
```
# 4140 targets, +1 or -1
y,y.shape
```
Comments:
- The _features_ (attributes) are characteristics that allow us to distinguish an item. In this example, they are all the pieces of information collected about the person or about the loan mechanism. There are 15 in total, each with 4140 real-valued samples.
- In general, a sample can be a document, an image, an audio file, or a row of a spreadsheet.
- _Features_ are usually real-valued, but they can also be boolean, discrete, or categorical.
- The target vector contains values indicating whether past loans in the lender's history were approved or rejected.
### `scikit-learn` interfaces
We will use the `scikit-learn` module to solve the problem. This module uses three interfaces:
- `fit()` (estimator), to build fitted models;
- `predict()` (predictor), to make predictions;
- `transform()` (transformer), to convert data.
The goal is to predict unsuccessful loans, that is, those that fall below the 95% threshold of $\alpha$.
```
from sklearn import neighbors
# create a classification instance
# 11 nearest neighbors
nn = 11
knn = neighbors.KNeighborsClassifier(n_neighbors=nn)
# train the classifier
knn.fit(x,y)
# compute the prediction
yh = knn.predict(x)
# prediction, ground truth
y,yh
# change nn and check the differences
#from numpy import size, where
#size(where(y - yh == 0))
```
```{note}
The _K_-nearest-neighbors classification algorithm was proposed in 1975. It determines the class label of a sample from its _K_ nearest neighbors in a training set. Read more [here](http://computacaointeligente.com.br/algoritmos/k-vizinhos-mais-proximos/).
```
#### Accuracy
We can measure the performance of the classifier using metrics. The default metric for the _KNN_ method is _accuracy_, given by:
$$acc = 1 - error = \frac{\text{no. of correct predictions}}{n}.$$
```
knn.score(x,y)
```
This _score_ looks good, but there is more to analyze... Let's plot the distribution of the labels.
```
# pie chart of the label distribution
plt.pie(np.c_[np.sum(np.where(y == 1,1,0)),
np.sum(np.where(y == -1,1,0))][0],
labels=['E parcial','E total'],colors=['r','g'],
shadow=False,autopct='%.2f')
plt.gcf().set_size_inches((6,6))
```
The chart shows that the database is imbalanced, since 81.57% of the loans were fully released. This can imply that the prediction will simply follow the "majority".
#### Confusion matrix
There are cases in which accuracy is not a good performance metric. When more detailed analyses are needed, we can use the _confusion matrix_.
With the confusion matrix, we can define metrics for different scenarios that take into account the values produced by the classifier and the values considered correct (the _ground truth_, i.e., the "gold standard").
In a binary classifier, there are four cases to consider, illustrated in {numref}`matriz-confusao`:
- _True positive_ (TP). The classifier predicts as positive a sample that is in fact positive.
- _False positive_ (FP). The classifier predicts as positive a sample that is actually negative.
- _True negative_ (TN). The classifier predicts as negative a sample that is in fact negative.
- _False negative_ (FN). The classifier predicts as negative a sample that is actually positive.
```{figure} ../figs/13/matriz-confusao.png
---
width: 600px
name: matriz-confusao
---
Confusion matrix. Source: own elaboration.
```
Combining these four concepts, we can define the metrics _accuracy_, _recall_ (or _sensitivity_), _specificity_, _precision_ (or _positive predictive value_), and _negative predictive value_, in this order, as follows:
$$\text{acc} = \dfrac{TP + TN}{TP + TN + FP + FN}$$
$$\text{rec} = \dfrac{TP}{TP + FN}$$
$$\text{spec} = \dfrac{TN}{TN + FP}$$
$$\text{prec} = \dfrac{TP}{TP + FP}$$
$$\text{npv} = \dfrac{TN}{TN + FN}$$
```{note}
For an illustrated interpretation of these metrics, see this [post](https://medium.com/swlh/explaining-accuracy-precision-recall-and-f1-score-f29d370caaa8).
```
We can compute the confusion matrix with
```
conf = lambda a,b: np.sum(np.logical_and(yh == a, y == b))
TP, TN, FP, FN = conf(-1,-1), conf(1,1), conf(-1,1), conf(1,-1)
np.array([[TP,FP],[FN,TN]])
```
or, using `scikit-learn`, with
```
from sklearn import metrics
metrics.confusion_matrix(yh,y) # switch (prediction, target)
```
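As a sanity check, the metrics defined above can be computed directly from these four counts. The following is a minimal sketch that reuses the `TP`, `TN`, `FP`, and `FN` variables from the cell above:
```
# sketch: metrics derived from the confusion-matrix counts computed above
acc  = (TP + TN) / (TP + TN + FP + FN)   # accuracy
rec  = TP / (TP + FN)                    # recall (sensitivity)
spec = TN / (TN + FP)                    # specificity
prec = TP / (TP + FP)                    # precision (positive predictive value)
npv  = TN / (TN + FN)                    # negative predictive value
print(acc, rec, spec, prec, npv)
```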
#### Training and test sets
Let's look at an example with `nn=1`.
```
knn = neighbors.KNeighborsClassifier(n_neighbors=1)
knn.fit(x,y)
yh = knn.predict(x)
metrics.accuracy_score(yh,y), metrics.confusion_matrix(yh,y)
```
This case has 100% accuracy and a diagonal confusion matrix. In the previous example, we did not distinguish between the set used for training and the one used for prediction.
However, in real problems the chances of such perfection occurring are minimal. Likewise, the classifier will generally be applied to previously unseen data. This forces us to split the data into two sets: one used for learning (the _training set_) and another used to test the accuracy (the _test set_).
Let's look at a more realistic simulation.
```
# Shuffle and split the data
# PRC*100% for training
# (1-PRC)*100% for testing
PRC = 0.7
perm = np.random.permutation(y.size)
split_point = int(np.ceil(y.shape[0]*PRC))
X_train = x[perm[:split_point].ravel(),:]
y_train = y[perm[:split_point].ravel()]
X_test = x[perm[split_point:].ravel(),:]
y_test = y[perm[split_point:].ravel()]
aux = {'training': X_train,
'training target':y_train,
'test':X_test,
'test target':y_test}
for k,v in aux.items():
print(k,v.shape,sep=': ')
```
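An equivalent split (up to the random permutation) can be obtained with scikit-learn's helper; the following sketch uses the same 70/30 proportion for comparison:
```
# sketch: the same 70/30 split using scikit-learn's utility
from sklearn.model_selection import train_test_split
X_tr, X_te, y_tr, y_te = train_test_split(x, y, train_size=0.7, random_state=0)
print(X_tr.shape, X_te.shape)
```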
Now we will train the model with this new partition.
```
knn = neighbors.KNeighborsClassifier(n_neighbors = 1)
knn.fit(X_train, y_train)
yht = knn.predict(X_train)
for k,v in {'acc': str(metrics.accuracy_score(yht, y_train)),
'conf. matrix': '\n' + str(metrics.confusion_matrix(y_train, yht))}.items():
print(k,v,sep=': ')
```
For `nn = 1`, the accuracy is 100%. Let's see what happens in this simulation with data not yet seen.
```
yht2 = knn.predict(X_test)
for k,v in {'acc': str(metrics.accuracy_score(yht2, y_test)),
'conf. matrix': '\n' + str(metrics.confusion_matrix(yht2, y_test))}.items():
print(k,v,sep=': ')
```
In this case, the accuracy naturally decreased.
You can find the accompanying video tutorial here: https://youtu.be/yL-A0N5JDJo
# Working with images
### Loading the packages
To work with images we will use the **cv2** library under the alias **cv**. We will also use the **numpy** library under the alias **np** for mathematical functions and array handling, and the **matplotlib** library under the alias **plt** for plotting the results.
The following code imports the packages.
```
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import NoNorm
```
For interactive plotting inside a Jupyter notebook we also need this:
```
%matplotlib notebook
```
## The image as a matrix
First we need to load the image into memory. We do this with the **imread** command from the **cv2** library.
***Warning:*** the color layers are loaded in the order **blue**, **green**, **red** instead of the usual **red**, **green**, **blue**. And since we are conservative, we will convert the image.
```
img_bgr = cv.imread("lena_original.jpg",cv.IMREAD_UNCHANGED)
img = cv.cvtColor(img_bgr, cv.COLOR_BGR2RGB)
```
If we want to display the image, we use the **imshow** command from the **matplotlib** library.
```
plt.figure()
plt.imshow(img)
```
For testing image-processing algorithms it is common to use only a crop of this image. We will now create it.
We can now treat the image as an array, so we can specify which range of indices we want to keep working with:
```
img_crop = img[20:270,150:400,:]
```
We can take a look at the crop and, since we will use it later, also save it.
```
plt.figure()
plt.imshow(img_crop)
cv.imwrite('lena_crop.jpg',cv.cvtColor(img_crop, cv.COLOR_RGB2BGR))
```
### Color channels
If we want to work with one of the color channels of the image, we can obtain it, for example, by zeroing out the other channels.
```
b = img_crop.copy()
g = img_crop.copy()
r = img_crop.copy()
r[:,:,1] = 0
r[:,:,2] = 0
g[:,:,0] = 0
g[:,:,2] = 0
b[:,:,0] = 0
b[:,:,1] = 0
```
The resulting images will then look as follows:
```
plt.figure()
plt.imshow(r)
plt.figure()
plt.imshow(g)
plt.figure()
plt.imshow(b)
```
### Basic image adjustments
In the next part of the exercise we will, for simplicity, make do with a grayscale image. We will load the crop we saved a moment ago, but tell the **imread** function to load the image in shades of gray only, using **cv.IMREAD_GRAYSCALE**.
```
img_grey = cv.imread("lena_crop.jpg",cv.IMREAD_GRAYSCALE)
```
We can plot it to check that everything went well. For grayscale images we will specify that we want to draw in shades of gray using **cmap='gray'**, and also that we do not want any automatic histogram equalization during plotting (we will deal with that in a moment). That is done with **norm=NoNorm()**.
```
plt.figure()
plt.imshow(img_grey,cmap='gray',norm=NoNorm())
```
<hr style="border:1px solid black"> </hr>
## Task 1:
write a function that lightens the given image by the given number of shades. Plot the lightened image.
<hr style="border:1px solid black"> </hr>
```
def lighten(img, amount):
return img
```
<hr style="border:1px solid black"> </hr>
## Task 2:
write a function that darkens the given image by the given number of shades. Plot the darkened image.
<hr style="border:1px solid black"> </hr>
```
def darken(img, amount):
return img
```
<hr style="border:1px solid black"> </hr>
## Task 3:
write a function that creates the inversion (negative) of the given image. Plot the negative.
<hr style="border:1px solid black"> </hr>
```
def invert(img):
return img
```
### Thresholding
Thresholding means that all values of the image that are smaller than a threshold we choose are set to black, while the remaining values are set to white.
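For orientation, a tiny sketch with a hypothetical threshold of 128 (Task 4 below asks you to wrap this idea in a general function):
```
# sketch: values below the threshold become black (0), the rest white (255)
binary = np.where(img_grey < 128, 0, 255).astype(np.uint8)
plt.figure()
plt.imshow(binary, cmap='gray', norm=NoNorm())
```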
<hr style="border:1px solid black"> </hr>
## Task 4:
write a function that performs thresholding of an image using the given value. Plot the resulting image.
<hr style="border:1px solid black"> </hr>
```
def threshold(img, threshold):
return img
```
## Histogram equalization
### Obtaining the histogram
First we load the image. Remember that we want it in grayscale.
```
img_uneq = cv.imread("uneq.jpg", cv.IMREAD_GRAYSCALE)
```
We check by plotting it.
```
plt.figure()
plt.imshow(img_uneq,cmap='gray',norm=NoNorm())
```
We can see that something is wrong with the image. Let's look at its histogram. We obtain the histogram of an image by counting, for each possible brightness value (there are 256 of them in total, 0 represents black, 255 white), how many times it occurs in the image.
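For reference, one possible way of counting the occurrences is sketched below (Task 5 asks you to write your own version; `np.bincount` would be a shorter alternative):
```
# sketch: count how many pixels take each of the 256 brightness values
def hist_sketch(image):
    hist = np.zeros(256, dtype=int)
    for value in image.ravel():   # loop over all pixels
        hist[value] += 1
    return hist
```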
<hr style="border:1px solid black"> </hr>
## Task 5:
write a function that creates the histogram of a given image and apply it to the image **img_uneq**. Plot the resulting histogram.
***Hint***: a bar chart is handy for plotting a histogram. You can get one with the **bar** command from the **matplotlib** library.
<hr style="border:1px solid black"> </hr>
```
def get_hist(image):
return hist
```
### Equalizing the histogram
From the plot we can see that the image uses only a narrow range of brightness values. We will try to fix this by so-called histogram equalization. The goal is to stretch the whole histogram so that it covers the full range from 0 to 255.
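One simple recipe is sketched below as a linear stretch between the smallest and largest brightness present in the image; note that a full histogram equalization would instead use the cumulative distribution, so treat this only as an illustrative assumption:
```
# sketch: linear contrast stretching to the full 0-255 range
def stretch_sketch(img):
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                      # guard against a constant image
        return img.copy()
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)
```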
<hr style="border:1px solid black"> </hr>
## Task 6:
write a function that equalizes the histogram of the given image and apply it to the image **img_uneq**. Plot the resulting image and its histogram.
<hr style="border:1px solid black"> </hr>
```
def eq_hist(img):
return img
```
## Convolution
Discrete convolution is an operation that modifies an image using a so-called convolution mask. The convolution mask can be thought of as a square matrix whose values represent the weights with which the brightness values of the original image are accumulated into the resulting image.
In practice, convolution works by placing the mask over the image so that its center lies on the point for which we want to compute the convolution. The value is then obtained by multiplying the mask element-wise with the image brightness values and summing everything up. A schematic picture can be found on Wikipedia, from which I borrowed it.

Our task will be nothing more than to program the discrete convolution and test it with different masks. We can define those right away.
```
average = np.array([[1, 1, 1],[1, 1, 1],[1, 1, 1]])
gauss_large = np.array([[1, 4, 7, 4, 1],[4, 16, 26, 16, 4],[7, 26, 41, 26, 7],[4, 16, 26, 16, 4],[1, 4, 7, 4, 1]])
gauss = np.array([[1, 2, 1],[2, 4, 2],[1, 2, 1]])
laplace = np.array([[0, 1, 0],[1, -4, 1],[0, 1, 0]])
edges = np.array([[0, -1, 0],[-1, 5, -1],[0, -1, 0]])
vertical_edges = np.array([[-1, 0, 1],[-2, 0, 2],[-1, 0, 1]])
horizontal_edges = np.array([[-1, -2, -1],[0, 0, 0],[1, 2, 1]])
```
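Task 7 below asks you to implement the convolution yourself; the following is only one possible reference sketch. Normalizing by the mask sum (useful for the smoothing masks) and leaving the border pixels untouched are simplifying assumptions of this sketch; the mask is also not flipped, which makes no difference for the symmetric masks defined above.
```
# sketch: naive discrete convolution with a square mask
def convolution_sketch(img, mask):
    k = mask.shape[0] // 2                       # mask "radius"
    norm = mask.sum() if mask.sum() != 0 else 1  # normalize smoothing masks
    out = img.astype(float).copy()
    for i in range(k, img.shape[0] - k):
        for j in range(k, img.shape[1] - k):
            region = img[i - k:i + k + 1, j - k:j + k + 1].astype(float)
            out[i, j] = np.sum(region * mask) / norm
    return np.clip(out, 0, 255).astype(np.uint8)
```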
Different masks are suited to different operations, so we will try them out on two different images. One of them will be a noisy Lena.
```
lena = cv.imread("lena_noise.jpg", cv.IMREAD_GRAYSCALE)
```
The second image will be a picture of a brick wall, on which the effect of edge-enhancing masks will be clearly visible.
```
bricks = cv.imread("bricks.jpg", cv.IMREAD_GRAYSCALE)
```
<hr style="border:1px solid black"> </hr>
## Task 7:
write a function that performs the discrete convolution of the given image with the given convolution mask. Test the effects of different convolution masks.
<hr style="border:1px solid black"> </hr>
```
def convolution(img, mask):
return conv
```
# Lab Notebook
Course: BioE 131
Lab No: Lab #7
Submission date:
Team members: Michael Fernandez, Jinho Ko
## Simulating the Data
```
import numpy as np
def b_generator(s, p):
data = np.random.choice( [0,1], size = s, replace = True, p = [p, 1.0-p])
data = np.packbits(data)
return data
def DNA_generator(s):
data = np.random.choice( ['A', 'T', 'C', 'G'], size = s, replace = True, p = [ 1.0/4.0 for _ in range(4) ] )
#data = np.packbits(data)
return data
def Protein_generator(s):
data = np.random.choice( list('ARNDCEQGHILKMFPSTWYV'), size = s, replace = True, p = [1.0/20.0 for _ in range(20)] )
#data = np.packbits(data)
return data
open('zeros_100p', 'wb').write( b_generator( 8* 100 * 2**20 , 1.0))
open('zeros_90p', 'wb').write( b_generator(8* 100 * 2**20, 0.9))
open('zeros_80p', 'wb').write( b_generator(8* 100 * 2**20, 0.8))
open('zeros_70p', 'wb').write( b_generator(8* 100 * 2**20, 0.7))
open('zeros_60p', 'wb').write( b_generator(8* 100 * 2**20, 0.6))
open('zeros_50p', 'wb').write( b_generator(8* 100 * 2**20, 0.5))
open('nt_seq.fa', 'w').write( ''.join(DNA_generator(10**6) ) )
open('aa_seq.fa', 'w').write( ''.join(Protein_generator(10**6)))
```
## Memo of Compression Times and Output Sizes
Zeros_100p : input size 105 MB
- gzip : real 0.686, user 0.656, sys 0.028 | 102 kB
- bzip2 : real 1.001, user 0.940, sys 0.060 | 112 B
- pbzip2 : real 0.103, user 1.836, sys 0.092 | 6.52 kB
- ArithmeticCompress : real 15.115, user 12.054, sys 0.052 | 1.03 kB
Zeros_90p : input size 105 MB
- gzip : real 19.292, user 19.287, sys 0.1 | 58.7 MB
- bzip2 : real 11.708, user 11.595, sys 0.112 | 61.2 MB
- pbzip2 : real 0.767, user 18.935, sys 0.678 | 61.2 MB
- ArithmeticCompress : real 28.687, user 28.569, sys 0.128 | 49.2 MB
Zeros_80p : input size 105 MB
- gzip : real 16.043, user 15.859, sys 0.148 | 81.2 MB
- bzip2 : real 12.742, user 12.637, sys 0.104 | 86.6 MB
- pbzip2 : real 0.945, user 23.834, sys 0.865 | 86.7 MB
- ArithmeticCompress : real 11.991, user 11.875, sys 0.116 | 65.8 MB
Zeros_70p : input size 105 MB
- gzip : real 6.066, user 5.929, sys 0.132 | 93.6 MB
- bzip2 : real 14.83, user 14.664, sys 0.153 | 99.8 MB
- pbzip2 : real 1.194, user 29.801, sys 0.746 | 99.8 MB
- ArithmeticCompress : real 39.193, user 39.018, sys 0.160 | 92.4 MB
Zeros_60p : input size 105 MB
- gzip : real 4.333, user 4.124, sys 0.208 | 102 MB
- bzip2 : real 15.934, user 15.777, sys 0.157 | 105 MB
- pbzip2 : real 1.412, user 36.861, sys 0.923 | 105 MB
- ArithmeticCompress : real 43.946, user 43.804, sys 0.129 | 102 MB
Zeros_50p : input size 105 MB
- gzip : real 3.550, user 3.442, sys 0.108 | 105 MB
- bzip2 : real 17.856, user 17.711, sys 0.144 | 105 MB
- pbzip2 : real 1.512, user 39.113, sys 0.890 | 105 MB
- ArithmeticCompress : real 42.529, user 42.367, sys 0.160 | 105 MB
nt_seq.fa : input size 1 MB
- gzip : real 0.126, user 0.121, sys 0.004 | 293 kB
- bzip2 : real 0.105, user 0.101, sys 0.004 | 274 kB
- pbzip2 : real 0.126, user 0.137, sys 0.005 | 274 kB
- ArithmeticCompress : real 0.251, user 0.251, sys 0.000 | 251 kB
aa_seq.fa : input size 1 MB
- gzip : real 0.051, user 0.050, sys 0.000 | 606 kB
- bzip2 : real 0.132, user 0.124, sys 0.008 | 553 kB
- pbzip2 : real 0.132, user 0.138, sys 0.009 | 553 kB
- ArithmeticCompress : real 0.288, user 0.288, sys 0.000 | 541 kB
| % compression | 100 | 90 | 80 | 70 | 60 | 50 | nt | aa |
|--------|-------|------|------|------|-----|----|------|------|
| gzip | 99.9 | 44.1 | 22.7 | 10.8 | 2.8 | 0 | 70.7 | 39.4 |
| bzip2 | 100.0 | 41.7 | 17.5 | 5.0 | 0 | 0 | 72.6 | 44.7 |
| pbzip2 | 99.9 | 41.7 | 17.4 | 5.0 | 0 | 0 | 72.6 | 44.7 |
| AC | 99.9 | 53.1 | 37.3 | 12.0 | 2.8 | 0 | 74.9 | 71.2 |
| time | 100 | 90 | 80 | 70 | 60 | 50 | nt | aa |
|--------|--------|--------|--------|--------|--------|--------|------|------|
| gzip | .486 | 19.292 | 16.043 | 6.066 | 4.333 | 3.550 | .126 | .051 |
| bzip2 | 1.001 | 11.708 | 12.742 | 14.83 | 15.934 | 17.856 | .105 | .132 |
| pbzip2 | .103 | 0.767 | 0.945 | 1.194 | 1.412 | 1.512 | .126 | .132 |
| AC | 15.115 | 28.687 | 11.991 | 39.193 | 43.946 | 42.529 | .251 | .288 |
## Questions
#### Which algorithm achieves the best level of compression on each file type?
- For zeros_100p, bzip2 achieved the best compression; for the other 7 files, ArithmeticCompress was dominant.
#### Which algorithm is the fastest?
- For nt_seq.fa, bzip2 was the fastest; for the other 7 files, pbzip2 was dominant.
#### What is the difference between bzip2 and pbzip2? Do you expect one to be faster and why?
- pbzip2 should be faster because it supports parallel compression on a multi-core CPU. The time difference in our results therefore indicates that our server runs on a multi-core CPU.
#### How does the level of compression change as the percentage of zeros increases? Why does this happen?
- As the percentage of zeros goes down (100 -> 50), the % compression decreases, because the algorithms work more effectively when there are more repetitive sequences. With 50% zeros there are fewer repetitive runs than with 90% or 100%.
#### What is the minimum number of bits required to store a single DNA base?
- by Shannon's theory of information, the bits required = log2(4) = 2 bits.
#### What is the minimum number of bits required to store an amino acid letter?
- by Shannon's theory of information, the bits required = log2(20) = 4.32 -> 5 bits.
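A quick check of these numbers (a small sketch using Python's math module):
```
import math
print(math.log2(4))    # 2.0 bits per DNA base
print(math.log2(20))   # ~4.32 bits per amino acid -> 5 bits in practice
```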
#### In your tests, how many bits did gzip and bzip2 actually require to store your random DNA and protein sequences?
- Theoretically, the DNA sequence should compress to about 250 kB (10^6 bases x 2 bits / 8 bits per byte) and the protein sequence to about 625 kB (10^6 residues x 5 bits / 8 bits per byte).
- In reality, gzip/bzip2 required 293 kB and 274 kB for the DNA sequence, and 606 kB and 553 kB for the protein sequence.
#### Are gzip and bzip2 performing well on DNA and proteins?
- For DNA, around 70% of the size was compressed away, and for protein around 40%, so the algorithms appear to work reasonably well on DNA and proteins.
## Compressing Real Data
From your knowledge about querying biological databases, find the nucleic acid sequences of gp120 homologs from at least 10 different HIV isolates and concatenate them together into a single multi-FASTA.
A priori, do you expect to achieve better or worse compression here than random data? Why?
- We expect the compression to work better, because real DNA sequences often contain more repetitive subsequences than randomized sequences.
| % compression | nt | GP120_10 |
|--------|------|----------|
| gzip | 70.7 | 75.4 |
| bzip2 | 72.6 | 78.2 |
| pbzip2 | 72.6 | 78.2 |
| AC | 74.9 | 72.8 |
| time | nt | GP120_10 |
|--------|------|----------|
| gzip | .126 | .022 |
| bzip2 | .105 | .020 |
| pbzip2 | .126 | .018 |
| AC | .251 | .035 |
How does the compression ratio of this file compare to random data?
- It looks like the compression ratios of gzip, bzip2, and pbzip2 are better on the gp120 homolog FASTA file than on the random nucleotide data. For ArithmeticCompress, the random sequence compressed slightly better.
## Estimating compression of 1000 terabytes
Most of the data, say 80%, is re-sequencing of genomes and plasmids that are very similar to each other.
Another 10% might be protein sequences,
and the last 10% are binary microscope images
which we’ll assume follow the worst-case scenario of being completely random.
Which algorithm do you propose to use for each type of data?
- For genomes and plasmids, we suggest pbzip2, because it is faster than and as efficient (78.2%) as AC, gzip, or bzip2 in a parallel computing environment.
- For protein sequences, we would suggest ArithmeticCompress, which compressed about 71.2% in our experiment (roughly double the compression achieved by the other algorithms).
- For binary microscope images, we do not suggest any compression, because compression will not help in the given situation (worst case of 50/50 zeros and ones). Instead, we can devote that compute to other work.
Provide an estimate for the fraction of space you can save using your compression scheme.
- for 800 TB (of genomes and plasmids): 78.2% compressed -> 174.4 TB stored
- for 100 TB (of protein seqs): 71.2% compressed -> 28.8 TB stored
- for 100 TB (of binary images): 0% compressed -> 100 TB stored
-> in total, 174.4 + 28.8 + 100 = 303.2 TB / 1000 TB stored (69.68% of the space saved)
How much of a bonus do you anticipate receiving this year?
- *50 dollars per TB saved is given as bonus.
- 50 dollars * 696.8 TB = 34,840 dollars are saved every day.
- Therefore, the annual bonus the team gets is 34,840 * 365 = 12,716,600 dollars.
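A back-of-the-envelope check of this estimate (a sketch; the sizes and percentages are the ones assumed above):
```
sizes = {'genomes': 800, 'proteins': 100, 'images': 100}       # TB per data type
saved = {'genomes': 0.782, 'proteins': 0.712, 'images': 0.0}   # fraction saved
stored = sum(tb * (1 - saved[k]) for k, tb in sizes.items())
print(stored, 1000 - stored)   # TB stored, TB saved
```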
# Train faster, more flexible models with Amazon SageMaker Linear Learner
Today Amazon SageMaker is launching several additional features to the built-in linear learner algorithm. Amazon SageMaker algorithms are designed to scale effortlessly to massive datasets and take advantage of the latest hardware optimizations for unparalleled speed. The Amazon SageMaker linear learner algorithm encompasses both linear regression and binary classification algorithms. These algorithms are used extensively in banking, fraud/risk management, insurance, and healthcare. The new features of linear learner are designed to speed up training and help you customize models for different use cases. Examples include classification with unbalanced classes, where one of your outcomes happens far less frequently than another. Or specialized loss functions for regression, where it’s more important to penalize certain model errors more than others.
In this blog post we'll cover three things:
1. Early stopping and saving the best model
1. New ways to customize linear learner models, including:
* Hinge loss (support vector machines)
* Quantile loss
* Huber loss
* Epsilon-insensitive loss
* Class weights options
1. Then we'll walk you through a hands-on example of using class weights to boost performance in binary classification
## Early Stopping
Linear learner trains models using Stochastic Gradient Descent (SGD) or variants of SGD like Adam. Training requires multiple passes over the data, called *epochs*, in which the data are loaded into memory in chunks called *batches*, sometimes called *minibatches*. How do we know how many epochs to run? Ideally, we'd like to continue training until convergence - that is, until we no longer see any additional benefits. Running additional epochs after the model has converged is a waste of time and money, but guessing the right number of epochs is difficult to do before submitting a training job. If we train for too few epochs, our model will be less accurate than it should be, but if we train for too many epochs, we'll waste resources and potentially harm model accuracy by overfitting. To remove the guesswork and optimize model training, linear learner has added two new features: automatic early stopping and saving the best model.
Early stopping works in two basic regimes: with or without a validation set. Often we split our data into training, validation, and testing data sets. Training is for optimizing the loss, validation is for tuning hyperparameters, and testing is for producing an honest estimate of how the model will perform on unseen data in the future. If you provide linear learner with a validation data set, training will stop early when validation loss stops improving. If no validation set is available, training will stop early when training loss stops improving.
#### Early Stopping with a validation data set
One big benefit of having a validation data set is that we can tell if and when we start overfitting to the training data. Overfitting is when the model gives predictions that are too closely tailored to the training data, so that generalization performance (performance on future unseen data) will be poor. The following plot on the right shows a typical progression during training with a validation data set. Until epoch 5, the model has been learning from the training set and doing better and better on the validation set. But in epochs 7-10, we see that the model has begun to overfit on the training set, which shows up as worse performance on the validation set. Regardless of whether the model continues to improve (overfit) on the training data, we want to stop training after the model starts to overfit. And we want to restore the best model from just before the overfitting started. These two features are now turned on by default in linear learner.
The default parameter values for early stopping are shown in the following code. To tweak the behavior of early stopping, try changing the values. To turn off early stopping entirely, choose a patience value larger than the number of epochs you want to run.
early_stopping_patience=3,
early_stopping_tolerance=0.001,
The parameter early_stopping_patience defines how many epochs to wait before ending training if no improvement is made. It's useful to have a little patience when deciding to stop early, since the training curve can be bumpy. Performance may get worse for one or two epochs before continuing to improve. By default, linear learner will stop early if performance has degraded for three epochs in a row.
The parameter early_stopping_tolerance defines the size of an improvement that's considered significant. If the ratio of the improvement in loss divided by the previous best loss is smaller than this value, early stopping will consider the improvement to be zero.
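In rough pseudocode terms, the stopping rule behaves like the following sketch (this is only an illustration of how the two parameters interact, not the actual linear learner implementation):
```
# sketch: stop when no significant improvement for `patience` epochs in a row
def should_stop(losses, patience=3, tolerance=0.001):
    best = losses[0]
    epochs_without_improvement = 0
    for loss in losses[1:]:
        if (best - loss) / best > tolerance:   # significant improvement
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
    return epochs_without_improvement >= patience
```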
#### Early stopping without a validation data set
When training with a training set only, we have no way to detect overfitting. But we still want to stop training once the model has converged and improvement has levelled off. In the left panel of the following figure, that happens around epoch 25.
<img src="images/early_stop.png">
#### Early stopping and calibration
You may already be familiar with the linear learner automated threshold tuning for binary classification models. Threshold tuning and early stopping work together seamlessly by default in linear learner.
When a binary classification model outputs a probability (e.g., logistic regression) or a raw score (SVM), we convert that to a binary prediction by applying a threshold, for example:
predicted_label = 1 if raw_prediction > 0.5 else 0
We might want to tune the threshold (0.5 in the example) based on the metric we care about most, such as accuracy or recall. Linear learner does this tuning automatically using the 'binary_classifier_model_selection_criteria' parameter. When threshold tuning and early stopping are both turned on (the default), then training stops early based on the metric you request. For example, if you provide a validation data set and request a logistic regression model with threshold tuning based on accuracy, then training will stop when the model with auto-thresholding reaches optimal performance on the validation data. If there is no validation set and auto-thresholding is turned off, then training will stop when the best value of the loss function on the training data is reached.
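Conceptually, the threshold search boils down to something like the sketch below, which scans candidate thresholds and keeps the one that maximizes the chosen metric on held-out data (the arrays `raw_scores` and `labels` are hypothetical placeholders, and this is not the linear learner internals):
```
import numpy as np

# sketch: pick the threshold that maximizes accuracy on held-out predictions
def tune_threshold(raw_scores, labels):
    thresholds = np.linspace(0.0, 1.0, 101)
    accuracies = [np.mean((raw_scores > t).astype(int) == labels) for t in thresholds]
    return thresholds[int(np.argmax(accuracies))]
```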
## New loss functions
The loss function is our definition of the cost of making an error in prediction. When we train a model, we push the model weights in the direction that minimizes loss, given the known labels in the training set. The most common and well-known loss function is squared loss, which is minimized when we train a standard linear regression model. Another common loss function is the one used in logistic regression, variously known as logistic loss, cross-entropy loss, or binomial likelihood. Ideally, the loss function we train on should be a close match to the business problem we're trying to solve. Having the flexibility to choose different loss functions at training time allows us to customize models to different use cases. In this section, we'll discuss when to use which loss function, and introduce several new loss functions that have been added to linear learner.
<img src="images/loss_functions.png">
### Squared loss
predictor_type='regressor',
loss='squared_loss',
$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} (w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^2$$
We'll use the following notation in all of the loss functions we discuss:
$w_0$ is the bias that the model learns
$\mathbf{w}$ is the vector of feature weights that the model learns
$y_i$ and $\mathbf{x_i}$ are the label and feature vector, respectively, from example $i$ of the training data
$N$ is the total number of training examples
Squared loss is a first choice for most regression problems. It has the nice property of producing an estimate of the mean of the label given the features. As seen in the plot above, squared loss implies that we pay a very high cost for very wrong predictions. This can cause problems if our training data include some extreme outliers. A model trained on squared loss will be very sensitive to outliers. Squared loss is sometimes known as mean squared error (MSE), ordinary least squares (OLS), or $\text{L}_2$ loss. Read more about [squared loss](https://en.wikipedia.org/wiki/Least_squares) on wikipedia.
### Absolute loss
predictor_type='regressor',
loss='absolute_loss',
$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} |w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i|$$
Absolute loss is less common than squared loss, but can be very useful. The main difference between the two is that training a model on absolute loss will produces estimates of the median of the label given the features. Squared loss estimates the mean, and absolute loss estimates the median. Whether you want to estimate the mean or median will depend on your use case. Let's look at a few examples:
* If an error of -2 costs you \$2 and an error of +50 costs you \$50, then absolute loss models your costs better than squared loss.
* If an error of -2 costs you \$2, while an error of +50 is simply unacceptably large, then it's important that your errors are generally small, and so squared loss is probably the right fit.
* If it's important that your predictions are too high as often as they're too low, then you want to estimate the median with absolute loss.
* If outliers in your training data are having too much influence on the model, try switching from squared to absolute loss. Large errors get a large amount of attention from absolute loss, but with squared loss, large errors get squared and become huge errors attracting a huge amount of attention. If the error is due to an outlier, it might not deserve a huge amount of attention.
Absolute loss is sometimes also known as $\text{L}_1$ loss or least absolute error. Read more about [absolute loss](https://en.wikipedia.org/wiki/Least_absolute_deviations) on wikipedia.
### Quantile loss
predictor_type='regressor',
loss='quantile_loss',
quantile=0.9,
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N q(y_i - w_0 - \mathbf{x_i}^\intercal \mathbf{w})^\text{+} + (1-q)(w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^\text{+} $$
$$ \text{where the parameter } q \text{ is the quantile you want to predict}$$
Quantile loss lets us predict an upper or lower bound for the label, given the features. To make predictions that are larger than the true label 90% of the time, train quantile loss with the 0.9 quantile. An example would be predicting electricity demand where we want to build near peak demand since building to the average would result in brown-outs and upset customers. Read more about [quantile loss](https://en.wikipedia.org/wiki/Quantile_regression) on wikipedia.
### Huber loss
predictor_type='regressor',
loss='huber_loss',
huber_delta=0.5,
$$ \text{Let the error be } e_i = w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i \text{. Then Huber loss solves:}$$
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N I(|e_i| < \delta) \frac{e_i^2}{2} + I(|e_i| \geq \delta) \left( |e_i|\delta - \frac{\delta^2}{2} \right) $$
$$ \text{where } I(a) = 1 \text{ if } a \text{ is true, else } 0 $$
Huber loss is an interesting hybrid of $\text{L}_1$ and $\text{L}_2$ losses. Huber loss counts small errors on a squared scale and large errors on an absolute scale. In the plot above, we see that Huber loss looks like squared loss when the error is near 0 and absolute loss beyond that. Huber loss is useful when we want to train with squared loss, but want to avoid squared loss's sensitivity to outliers. Huber loss gives less importance to outliers by not squaring the larger errors. Read more about [Huber loss](https://en.wikipedia.org/wiki/Huber_loss) on wikipedia.
### Epsilon-insensitive loss
predictor_type='regressor',
loss='eps_insensitive_squared_loss',
loss_insensitivity=0.25,
For epsilon-insensitive squared loss, we minimize
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N max(0, (w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i)^2 - \epsilon^2) $$
And for epsilon-insensitive absolute loss, we minimize
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N max(0, |w_0 + \mathbf{x_i}^\intercal \mathbf{w} - y_i| - \epsilon) $$
Epsilon-insensitive loss is useful when errors don't matter to you as long as they're below some threshold. Set the threshold that makes sense for your use case as epsilon. Epsilon-insensitive loss will allow the model to pay no cost for making errors smaller than epsilon.
### Logistic regression
predictor_type='binary_classifier',
loss='logistic',
binary_classifier_model_selection_criteria='recall_at_target_precision',
target_precision=0.9,
Each of the losses we've discussed is for regression problems, where the labels are floating point numbers. The last two losses we'll cover, logistic regression and support vector machines, are for binary classification problems where the labels are one of two classes. Linear learner expects the class labels to be 0 or 1. This may require some preprocessing, for example if your labels are coded as -1 and +1, or as blue and yellow. Logistic regression produces a predicted probability for each data point:
$$ p_i = \sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w}) $$
The loss function minimized in training a logistic regression model is the log likelihood of a binomial distribution. It assigns the highest cost to predictions that are confident and wrong, for example a prediction of 0.99 when the true label was 0, or a prediction of 0.002 when the true label was positive. The loss function is:
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N -y_i \text{log}(p_i) - (1 - y_i) \text{log}(1 - p_i) $$
$$ \text{where } \sigma(x) = \frac{\text{exp}(x)}{1 + \text{exp}(x)} $$
Read more about [logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) on wikipedia.
### Hinge loss (support vector machine)
predictor_type='binary_classifier',
loss='hinge_loss',
margin=1.0,
binary_classifier_model_selection_criteria='recall_at_target_precision',
target_precision=0.9,
Another popular option for binary classification problems is the hinge loss, also known as a Support Vector Machine (SVM) or Support Vector Classifier (SVC) with a linear kernel. It places a high cost on any points that are misclassified or nearly misclassified. To tune the meaning of "nearly", adjust the margin parameter:
$$\text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^{N} y_i\left(\frac{m+1}{2} - w_0 - \mathbf{x_i}^\text{T}\mathbf{w}\right)^\text{+} + (1-y_i)\left(\frac{m-1}{2} + w_0 + \mathbf{x_i}^\text{T}\mathbf{w}\right)^\text{+}$$
$$\text{where } a^\text{+} = \text{max}(0, a)$$
Note that the hinge loss we use is a reparameterization of the usual hinge loss: typically hinge loss expects the binary label to be in {-1, 1}, whereas ours expects the binary labels to be in {0, 1}. This reparameterization allows LinearLearner to accept the same data format for binary classification regardless of the training loss. Read more about [hinge loss](https://en.wikipedia.org/wiki/Hinge_loss) on wikipedia.
It's difficult to say in advance whether logistic regression or SVM will be the right model for a binary classification problem, though logistic regression is generally a more popular choice than SVM. If it's important to provide probabilities of the predicted class labels, then logistic regression will be the right choice. If all that matters is better accuracy, precision, or recall, then either model may be appropriate. One advantage of logistic regression is that it produces the probability of an example having a positive label. That can be useful, for example in an ad serving system where the predicted click probability is used as an input to a bidding mechanism. Hinge loss does not produce class probabilities.
Whichever model you choose, you're likely to benefit from linear learner's options for tuning the threshold that separates positive from negative predictions.
## Class weights
In some binary classification problems, we may find that our training data is highly unbalanced. For example, in credit card fraud detection, we're likely to have many more examples of non-fraudulent transactions than fraudulent. In these cases, balancing the class weights may improve model performance.
Suppose we have 98% negative and 2% positive examples. To balance the total weight of each class, we can set the positive class weight to be 49. Now the total weight contributed by the negative class is 0.98 $\cdot$ 1 = 0.98, and the total weight contributed by the positive class is 0.02 $\cdot$ 49 = 0.98. The negative class weight multiplier is always 1.
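For a concrete sense of the arithmetic, the balanced weight is just the ratio of negative to positive examples; here is a sketch with a hypothetical 0/1 label array `labels`:
```
import numpy as np

labels = np.array([0] * 98 + [1] * 2)            # hypothetical 98% / 2% split
positive_weight = (labels == 0).sum() / (labels == 1).sum()
print(positive_weight)                           # 49.0
```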
To incorporate the positive class weight in training, we multiply the loss by the positive weight whenever we see a positive class label. For logistic regression, the weighted loss is:
Weighted logistic regression:
$$ \text{argmin}_{w_0, \mathbf{w}} \sum_{i=1}^N -p \, y_i \text{log}(\sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w})) - (1 - y_i) \text{log}(1 - \sigma(w_0 + \mathbf{x_i}^\intercal \mathbf{w})) $$
$$ \text{where } p \text{ is the weight for the positive class.} $$
The only difference between the weighted and unweighted logistic regression loss functions is the presence of the class weight $p$ on the left-hand term of the loss. Class weights in the hinge loss (SVM) classifier are applied in the same way.
To apply class weights when training a model with linear learner, supply the weight for the positive class as a training parameter:
positive_example_weight_mult=200,
Or to ask linear learner to calculate the positive class weight for you:
positive_example_weight_mult='balanced',
## Hands-on example: Detecting credit card fraud
In this section, we'll look at a credit card fraud detection dataset. The data set (Dal Pozzolo et al. 2015) was downloaded from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data). We have features and labels for over a quarter million credit card transactions, each of which is labeled as fraudulent or not fraudulent. We'd like to train a model based on the features of these transactions so that we can predict risky or fraudulent transactions in the future. This is a binary classification problem.
We'll walk through training linear learner with various settings and deploying an inference endpoint. We'll evaluate the quality of our models by hitting that endpoint with observations from the test set. We can take the real-time predictions returned by the endpoint and evaluate them against the ground-truth labels in our test set.
Next, we'll apply the linear learner threshold tuning functionality to get better precision without sacrificing recall. Then, we'll push the precision even higher using the linear learner new class weights feature. Because fraud can be extremely costly, we would prefer to have high recall, even if this means more false positives. This is especially true if we are building a first line of defense, flagging potentially fraudulent transactions for further review before taking actions that affect customers.
First we'll do some preprocessing on this data set: we'll shuffle the examples and split them into train and test sets. To run this notebook under your own AWS account, you'll need to change the Amazon S3 locations. First download the raw data from [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud/data) and upload it to your SageMaker notebook instance (or wherever you're running this notebook). Only 0.17% of the data have positive labels, making this a challenging classification problem.
```
import boto3
import io
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import sagemaker
import sagemaker.amazon.common as smac
from sagemaker import get_execution_role
from sagemaker.predictor import csv_serializer, json_deserializer
# Set data locations
bucket = '<your_s3_bucket_here>' # replace this with your own bucket
prefix = 'sagemaker/DEMO-linear-learner-loss-weights' # replace this with your own prefix
s3_train_key = '{}/train/recordio-pb-data'.format(prefix)
s3_train_path = os.path.join('s3://', bucket, s3_train_key)
local_raw_data = 'creditcard.csv.zip'
role = get_execution_role()
# Confirm access to s3 bucket
for obj in boto3.resource('s3').Bucket(bucket).objects.all():
print(obj.key)
# Read the data, shuffle, and split into train and test sets, separating the labels (last column) from the features
raw_data = pd.read_csv(local_raw_data).values  # .values replaces the deprecated .as_matrix()
np.random.seed(0)
np.random.shuffle(raw_data)
train_size = int(raw_data.shape[0] * 0.7)
train_features = raw_data[:train_size, :-1]
train_labels = raw_data[:train_size, -1]
test_features = raw_data[train_size:, :-1]
test_labels = raw_data[train_size:, -1]
# Convert the processed training data to protobuf and write to S3 for linear learner
vectors = np.array([t.tolist() for t in train_features]).astype('float32')
labels = np.array([t.tolist() for t in train_labels]).astype('float32')
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, vectors, labels)
buf.seek(0)
boto3.resource('s3').Bucket(bucket).Object(s3_train_key).upload_fileobj(buf)
```
We'll wrap the model training setup in a convenience function that takes in the S3 location of the training data, the model hyperparameters that define our training job, and the S3 output path for model artifacts. Inside the function, we'll hardcode the algorithm container, the number and type of EC2 instances to train on, and the input and output data formats.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
def predictor_from_hyperparams(s3_train_data, hyperparams, output_path):
"""
Create an Estimator from the given hyperparams, fit to training data, and return a deployed predictor
"""
# specify algorithm containers and instantiate an Estimator with given hyperparams
container = get_image_uri(boto3.Session().region_name, 'linear-learner')
linear = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path=output_path,
sagemaker_session=sagemaker.Session())
linear.set_hyperparameters(**hyperparams)
# train model
linear.fit({'train': s3_train_data})
# deploy a predictor
linear_predictor = linear.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
linear_predictor.content_type = 'text/csv'
linear_predictor.serializer = csv_serializer
linear_predictor.deserializer = json_deserializer
return linear_predictor
```
And add another convenience function for setting up a hosting endpoint, making predictions, and evaluating the model. To make predictions, we need to set up a model hosting endpoint. Then we feed test features to the endpoint and receive predicted test labels. To evaluate the models we create in this exercise, we'll capture predicted test labels and compare them to actuals using some common binary classification metrics.
```
def evaluate(linear_predictor, test_features, test_labels, model_name, verbose=True):
"""
Evaluate a model on a test set given the prediction endpoint. Return binary classification metrics.
"""
# split the test data set into 100 batches and evaluate using prediction endpoint
prediction_batches = [linear_predictor.predict(batch)['predictions'] for batch in np.array_split(test_features, 100)]
    # parse the raw prediction JSON to extract the predicted label
test_preds = np.concatenate([np.array([x['predicted_label'] for x in batch]) for batch in prediction_batches])
# calculate true positives, false positives, true negatives, false negatives
tp = np.logical_and(test_labels, test_preds).sum()
fp = np.logical_and(1-test_labels, test_preds).sum()
tn = np.logical_and(1-test_labels, 1-test_preds).sum()
fn = np.logical_and(test_labels, 1-test_preds).sum()
# calculate binary classification metrics
recall = tp / (tp + fn)
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + fp + tn + fn)
f1 = 2 * precision * recall / (precision + recall)
if verbose:
print(pd.crosstab(test_labels, test_preds, rownames=['actuals'], colnames=['predictions']))
print("\n{:<11} {:.3f}".format('Recall:', recall))
print("{:<11} {:.3f}".format('Precision:', precision))
print("{:<11} {:.3f}".format('Accuracy:', accuracy))
print("{:<11} {:.3f}".format('F1:', f1))
return {'TP': tp, 'FP': fp, 'FN': fn, 'TN': tn, 'Precision': precision, 'Recall': recall, 'Accuracy': accuracy,
'F1': f1, 'Model': model_name}
```
And finally we'll add a convenience function to delete prediction endpoints after we're done with them:
```
def delete_endpoint(predictor):
try:
boto3.client('sagemaker').delete_endpoint(EndpointName=predictor.endpoint)
print('Deleted {}'.format(predictor.endpoint))
except:
print('Already deleted: {}'.format(predictor.endpoint))
```
Let's begin by training a binary classifier model with the linear learner default settings. Note that we're setting the number of epochs to 40, which is much higher than the default of 10 epochs. With early stopping, we don't have to worry about setting the number of epochs too high. Linear learner will stop training automatically after the model has converged.
```
# Training a binary classifier with default settings: logistic regression
defaults_hyperparams = {
'feature_dim': 30,
'predictor_type': 'binary_classifier',
'epochs': 40
}
defaults_output_path = 's3://{}/{}/defaults/output'.format(bucket, prefix)
defaults_predictor = predictor_from_hyperparams(s3_train_path, defaults_hyperparams, defaults_output_path)
```
And now we'll produce a model with a threshold tuned for the best possible precision with recall fixed at 90%:
```
# Training a binary classifier with automated threshold tuning
autothresh_hyperparams = {
'feature_dim': 30,
'predictor_type': 'binary_classifier',
'binary_classifier_model_selection_criteria': 'precision_at_target_recall',
'target_recall': 0.9,
'epochs': 40
}
autothresh_output_path = 's3://{}/{}/autothresh/output'.format(bucket, prefix)
autothresh_predictor = predictor_from_hyperparams(s3_train_path, autothresh_hyperparams, autothresh_output_path)
```
### Improving recall with class weights
Now we'll improve on these results using a new feature added to linear learner: class weights for binary classification. We introduced this feature in the *Class Weights* section, and now we'll look into its application to the credit card fraud dataset by training a new model with balanced class weights:
```
# Training a binary classifier with class weights and automated threshold tuning
class_weights_hyperparams = {
'feature_dim': 30,
'predictor_type': 'binary_classifier',
'binary_classifier_model_selection_criteria': 'precision_at_target_recall',
'target_recall': 0.9,
'positive_example_weight_mult': 'balanced',
'epochs': 40
}
class_weights_output_path = 's3://{}/{}/class_weights/output'.format(bucket, prefix)
class_weights_predictor = predictor_from_hyperparams(s3_train_path, class_weights_hyperparams, class_weights_output_path)
```
The first training examples used the default loss function for binary classification, logistic loss. Now let's train a model with hinge loss. This is also called a support vector machine (SVM) classifier with a linear kernel. Threshold tuning is supported for all binary classifier models in linear learner.
```
# Training a binary classifier with hinge loss and automated threshold tuning
svm_hyperparams = {
'feature_dim': 30,
'predictor_type': 'binary_classifier',
'loss': 'hinge_loss',
'binary_classifier_model_selection_criteria': 'precision_at_target_recall',
'target_recall': 0.9,
'epochs': 40
}
svm_output_path = 's3://{}/{}/svm/output'.format(bucket, prefix)
svm_predictor = predictor_from_hyperparams(s3_train_path, svm_hyperparams, svm_output_path)
```
And finally, let's see what happens with balancing the class weights for the SVM model:
```
# Training a binary classifier with hinge loss, balanced class weights, and automated threshold tuning
svm_balanced_hyperparams = {
'feature_dim': 30,
'predictor_type': 'binary_classifier',
'loss': 'hinge_loss',
'binary_classifier_model_selection_criteria': 'precision_at_target_recall',
'target_recall': 0.9,
'positive_example_weight_mult': 'balanced',
'epochs': 40
}
svm_balanced_output_path = 's3://{}/{}/svm_balanced/output'.format(bucket, prefix)
svm_balanced_predictor = predictor_from_hyperparams(s3_train_path, svm_balanced_hyperparams, svm_balanced_output_path)
```
Now we'll make use of the prediction endpoint we've set up for each model by sending them features from the test set and evaluating their predictions with standard binary classification metrics.
```
# Evaluate the trained models
predictors = {'Logistic': defaults_predictor, 'Logistic with auto threshold': autothresh_predictor,
'Logistic with class weights': class_weights_predictor, 'Hinge with auto threshold': svm_predictor,
'Hinge with class weights': svm_balanced_predictor}
metrics = {key: evaluate(predictor, test_features, test_labels, key, False) for key, predictor in predictors.items()}
pd.set_option('display.float_format', lambda x: '%.3f' % x)
display(pd.DataFrame(list(metrics.values())).loc[:, ['Model', 'Recall', 'Precision', 'Accuracy', 'F1']])
```
The results are in! With threshold tuning, we can accurately predict 85-90% of the fraudulent transactions in the test set (due to randomness in training, recall will vary between 0.85 and 0.9 across multiple runs). But in addition to those true positives, we'll have a high number of false positives: 90-95% of the transactions we predict to be fraudulent are in fact not fraudulent (precision varies between 0.05 and 0.1). This model would work well as a first line of defense, flagging potentially fraudulent transactions for further review. If we instead want a model that raises very few false alarms, at the cost of catching far fewer of the fraudulent transactions, then we should optimize for higher precision:
binary_classifier_model_selection_criteria='recall_at_target_precision',
target_precision=0.9,
And what about the results of using our new feature, class weights for binary classification? Training with class weights has made a huge improvement to this model's performance! The precision is roughly doubled, while recall is still held constant at 85-90%.
Balancing class weights improved the performance of our SVM predictor, but it still does not match the corresponding logistic regression model for this dataset. Comparing all of the models we've fit so far, logistic regression with class weights and tuned thresholds did the best.
#### Note on target vs. observed recall
It's worth taking some time to look more closely at these results. If we asked linear learner for a model calibrated to a target recall of 0.9, then why didn't we get exactly 90% recall on the test set? The reason is the difference between training, validation, and testing. Linear learner calibrates thresholds for binary classification on the validation data set when one is provided, or else on the training set. Since we did not provide a validation data set, the thresholds were calculated on the training data. And since the training, validation, and test data sets don't match exactly, the target recall we request is only an approximation. In this case, the threshold that produced 90% recall on the training data happened to produce only 85-90% recall on the test data (due to some randomness in training, the results will vary from one run to the next). The variation of recall on the test set versus the training set depends on the number of positive examples: although we have over 280,000 examples in the entire dataset, we have only 337 positive examples, hence the large difference. The accuracy of this approximation can be improved by providing a large validation data set to get a more accurate threshold, and then evaluating on a large test set to get a more accurate benchmark of the model and its threshold. For even more fine-grained control, we can set the number of calibration samples to a higher number. Its default value is already quite high at 10 million samples:
num_calibration_samples=10000000,
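Alternatively, the most direct way to tighten the calibration is to supply a validation channel when training. Here is a sketch, assuming a validation set had also been written to S3 in the same recordIO-protobuf format (the `s3_validation_path` name is hypothetical):
```
# Sketch: let linear learner calibrate its threshold on held-out validation data
linear.fit({'train': s3_train_path, 'validation': s3_validation_path})
```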
#### Clean Up
Finally we'll clean up by deleting the prediction endpoints we set up:
```
for predictor in [defaults_predictor, autothresh_predictor, class_weights_predictor,
svm_predictor, svm_balanced_predictor]:
delete_endpoint(predictor)
```
We've just shown how to use linear learner's new early stopping feature, new loss functions, and new class weights feature to improve credit card fraud prediction. Class weights can help you optimize recall or precision for all types of fraud detection, as well as other classification problems with rare events, like ad click prediction or mechanical failure prediction. Try using class weights in your binary classification problem, or try one of the new loss functions for your regression problems: use quantile prediction to put confidence intervals around your predictions by learning the 5% and 95% quantiles. For more information about new loss functions and class weights, see the linear learner [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/linear-learner.html).
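As a concrete sketch of that last suggestion (the hyperparameter values here are illustrative, and the 30-dimensional feature size is simply carried over from this dataset), a 95th percentile quantile regressor could be configured as below, with a second model trained at `quantile: 0.05` for the lower bound of the interval:
```
# Sketch: quantile regression with linear learner (upper bound of a 90% prediction interval)
quantile_hyperparams = {
    'feature_dim': 30,
    'predictor_type': 'regressor',
    'loss': 'quantile_loss',
    'quantile': 0.95,
    'epochs': 40
}
```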
##### References
Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015. See link to full license text on [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud).
```
from pyvis.network import Network
import networkx as nx
import json
import functools
import itertools
import collections
from matplotlib import pyplot as plt
from networkx.drawing.nx_agraph import write_dot, graphviz_layout
# utility functions
def none_max(a, b):
if a is None:
return b
if b is None:
return a
return max(a, b)
def max_dict(dict_a, dict_b):
all_keys = dict_a.keys() | dict_b.keys()
return {k: none_max(dict_a.get(k), dict_b.get(k)) for k in all_keys}
def plot_graph(G, heading, layout=False, width=1000, height=550, physics=True):
g = Network(height=height,
width=width,
notebook=True,
directed=True,
heading=heading,
layout=layout)
# level_dict = nx.nx.single_source_shortest_path_length(G.reverse(), num_mapping["fin:end"])
# [g.add_node(k, level=5-v) for k, v in level_dict.items()]
# level_dict[num_mapping["fin:end"]] = len(nx.dag_longest_path(G))-1
# [g.add_node(k, level=v+1) for k, v in level_dict.items()]
g.toggle_physics(physics)
nodes = list(G.nodes())
root_nodes = [n for n,d in G.in_degree() if d==0]
level_dict = {i: 0 for i in root_nodes}
level_dict_two = (
functools.reduce(lambda a,b : max_dict(a,b),
[{j: max([len(k) for k in nx.all_simple_paths(G, i, j)])
for j in nx.descendants(G, i)}
for i in root_nodes]
)
)
level_dict.update(level_dict_two)
[g.add_node(k, level=v) for k, v in level_dict.items()]
g.from_nx(G)
return g
G = nx.DiGraph()
G.add_edges_from([(1,2), (1,3), (2,4), (3,4), (1,4)])
plot_graph(G, heading="DAG", layout=True).show("./test.html")
dict(G.in_degree)
TR = nx.DiGraph()
TR.add_nodes_from(G.nodes())
plot_graph(TR, heading="TR", layout=True).show("ex.html")
for u in G:
print(set(G[u]))
import json
def print_vars(d):
for i in d:
print("%-20s %s" % (str(i), str(d[i])))
print()
def transitive_reduction(G):
""" Returns transitive reduction of a directed graph
The transitive reduction of G = (V,E) is a graph G- = (V,E-) such that
for all v,w in V there is an edge (v,w) in E- if and only if (v,w) is
in E and there is no path from v to w in G with length greater than 1.
Parameters
----------
G : NetworkX DiGraph
A directed acyclic graph (DAG)
Returns
-------
NetworkX DiGraph
The transitive reduction of `G`
Raises
------
NetworkXError
If `G` is not a directed acyclic graph (DAG) transitive reduction is
not uniquely defined and a :exc:`NetworkXError` exception is raised.
References
----------
https://en.wikipedia.org/wiki/Transitive_reduction
"""
TR = nx.DiGraph()
TR.add_nodes_from(G.nodes())
descendants = {}
check_count = dict(G.in_degree)
print_vars({'check_count': check_count})
for u in G:
u_nbrs = set(G[u])
print_vars({'u': str(u), 'u_nbrs': u_nbrs})
for v in G[u]:
if v in u_nbrs:
if v not in descendants:
descendants[v] = {y for x,y in nx.dfs_edges(G, v)}
print_vars({'v': str(v), 'descendants': descendants})
u_nbrs -= descendants[v]
check_count[v] -= 1
print_vars({'v': v, 'u_nbrs': u_nbrs, 'check_count': check_count})
if check_count[v] == 0:
del descendants[v]
TR.add_edges_from((u, v) for v in u_nbrs)
print_vars({'TR Edges': list(TR.edges())})
return TR
G = nx.DiGraph()
G.add_edges_from([(1,2), (1,3), (2,4), (3,4), (1,4)])
TR = transitive_reduction(G)
plot_graph(TR, layout=True, heading="TR").show("tr.html")
```
The algorithm in plain English:

- For every node `u`:
  - Start with a set `u_nbrs` containing all children of `u`.
  - For each child `v` of `u`, use `nx.dfs_edges` to collect all descendants of `v`, then delete those descendants from `u_nbrs`, since there is a longer path from `u` to each of them that passes through `v`.
  - After doing this for every child of `u`, add an edge from `u` to every node remaining in `u_nbrs`.
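If your installed NetworkX version is recent enough (2.1 and later ship `nx.transitive_reduction`), you can sanity-check the walkthrough implementation against the built-in one. A small sketch, assuming that function is available:
```
# Sketch: compare the walkthrough implementation with NetworkX's built-in transitive_reduction
G = nx.DiGraph()
G.add_edges_from([(1, 2), (1, 3), (2, 4), (3, 4), (1, 4)])
ours = transitive_reduction(G)          # prints its debug output along the way
builtin = nx.transitive_reduction(G)
print(sorted(ours.edges()) == sorted(builtin.edges()))  # expected: True
```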
# RMSProp
As we noted in the ["Adagrad"](adagrad.md) section, because the state variable $\boldsymbol{s}_t$ in the denominator of the learning rate adjustment keeps accumulating the element-wise squares of the mini-batch stochastic gradients, the learning rate of each element of the objective function's independent variable keeps decreasing (or stays unchanged) throughout the iterations. As a result, when the learning rate drops quickly in the early iterations while the current solution is still poor, Adagrad may struggle to find a useful solution in later iterations because the learning rate has become too small. To address this issue, the RMSProp algorithm makes a small modification to Adagrad [1].
## The Algorithm
We introduced the exponentially weighted moving average in the ["Momentum"](momentum.md) section. Unlike Adagrad, whose state variable $\boldsymbol{s}_t$ is the sum of the element-wise squares of all mini-batch stochastic gradients $\boldsymbol{g}_t$ up to time step $t$, RMSProp maintains an exponentially weighted moving average of these element-wise squared gradients. Specifically, given the hyperparameter $0 \leq \gamma < 1$, at time step $t>0$ RMSProp computes
$$\boldsymbol{s}_t \leftarrow \gamma \boldsymbol{s}_{t-1} + (1 - \gamma) \boldsymbol{g}_t \odot \boldsymbol{g}_t. $$
As in Adagrad, RMSProp readjusts the learning rate of each element of the objective function's independent variable through element-wise operations, and then updates the variable:
$$\boldsymbol{x}_t \leftarrow \boldsymbol{x}_{t-1} - \frac{\eta}{\sqrt{\boldsymbol{s}_t + \epsilon}} \odot \boldsymbol{g}_t, $$
Here $\eta$ is the learning rate and $\epsilon$ is a constant added to maintain numerical stability, such as $10^{-6}$. Because the state variable of RMSProp is an exponentially weighted moving average of the squared term $\boldsymbol{g}_t \odot \boldsymbol{g}_t$, it can be viewed as a weighted average of the squared mini-batch stochastic gradients from roughly the last $1/(1-\gamma)$ time steps. Consequently, the learning rate of each element of the independent variable no longer keeps decreasing (or staying unchanged) throughout the iterations.
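To make the "roughly the last $1/(1-\gamma)$ time steps" interpretation concrete, here is a small standalone NumPy sketch (not part of the original notebook) that prints the geometric weights the past squared gradients receive when $\gamma = 0.9$:
```
import numpy as np

gamma = 0.9
# In s_t, the squared gradient from k steps ago carries weight (1 - gamma) * gamma**k
weights = (1 - gamma) * gamma ** np.arange(20)
print(np.round(weights, 4))
# The most recent 10 squared gradients carry the bulk of the total weight (about 65% here)
print('weight on the last 10 steps: %.3f' % weights[:10].sum())
```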
As usual, let us first observe the iterative trajectory of the independent variable under RMSProp for the objective function $f(\boldsymbol{x})=0.1x_1^2+2x_2^2$. Recall from the ["Adagrad"](adagrad.md) section that with a learning rate of 0.4, Adagrad moved the independent variable only slightly in the later stages of iteration. At the same learning rate, RMSProp approaches the optimal solution noticeably faster.
```
%matplotlib inline
import d2lzh as d2l
import math
from mxnet import nd
def rmsprop_2d(x1, x2, s1, s2):
g1, g2, eps = 0.2 * x1, 4 * x2, 1e-6
s1 = gamma * s1 + (1 - gamma) * g1 ** 2
s2 = gamma * s2 + (1 - gamma) * g2 ** 2
x1 -= eta / math.sqrt(s1 + eps) * g1
x2 -= eta / math.sqrt(s2 + eps) * g2
return x1, x2, s1, s2
def f_2d(x1, x2):
return 0.1 * x1 ** 2 + 2 * x2 ** 2
eta, gamma = 0.4, 0.9
d2l.show_trace_2d(f_2d, d2l.train_2d(rmsprop_2d))
```
## Implementation from Scratch
Next, we implement RMSProp according to the formulas above.
```
features, labels = d2l.get_data_ch7()
def init_rmsprop_states():
s_w = nd.zeros((features.shape[1], 1))
s_b = nd.zeros(1)
return (s_w, s_b)
def rmsprop(params, states, hyperparams):
gamma, eps = hyperparams['gamma'], 1e-6
for p, s in zip(params, states):
s[:] = gamma * s + (1 - gamma) * p.grad.square()
p[:] -= hyperparams['lr'] * p.grad / (s + eps).sqrt()
```
We set the initial learning rate to 0.01 and the hyperparameter $\gamma$ to 0.9. In this case, the variable $\boldsymbol{s}_t$ can be viewed as a weighted average of the squared terms $\boldsymbol{g}_t \odot \boldsymbol{g}_t$ from roughly the last $1/(1-0.9) = 10$ time steps.
```
features, labels = d2l.get_data_ch7()
d2l.train_ch7(rmsprop, init_rmsprop_states(), {'lr': 0.01, 'gamma': 0.9},
features, labels)
```
## Concise Implementation
By creating a `Trainer` instance with the algorithm name "rmsprop", we can use the RMSProp algorithm provided by Gluon to train a model. Note that the hyperparameter $\gamma$ is specified through `gamma1`.
```
d2l.train_gluon_ch7('rmsprop', {'learning_rate': 0.01, 'gamma1': 0.9},
features, labels)
```
## Summary
* RMSProp differs from Adagrad in that it uses an exponentially weighted moving average of the element-wise squares of the mini-batch stochastic gradients to adjust the learning rate.
## Exercises
* What happens to the experimental results if you set the value of $\gamma$ to 1? Why?
* Try other combinations of the initial learning rate and the hyperparameter $\gamma$, then observe and analyze the experimental results.
## Scan the QR code to access the [discussion forum](https://discuss.gluon.ai/t/topic/2275)

## References
[1] Tieleman, T., & Hinton, G. (2012). Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 26-31.
**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x**
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.nlp import *
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from torchtext import vocab, data, datasets
import pandas as pd
sl=1000
vocab_size=200000
PATH='data/arxiv/arxiv.csv'
# You can download a file similar to Jeremy's original arxiv.csv here: https://drive.google.com/file/d/0B34BjUTAgwm6SzdPWDAtVG1vWVU/. It comes from this article https://hackernoon.com/building-brundage-bot-10252facf3d1 and github https://github.com/amauboussin/arxiv-twitterbot; just rename it to arxiv.csv
df = pd.read_csv(PATH)
df.head()
df['txt'] = df.category + ' ' + df.title + '\n' + df.summary
print(df.iloc[0].txt)
n=len(df); n
val_idx = get_cv_idxs(n, val_pct=0.1)
((val,trn),(val_y,trn_y)) = split_by_idx(val_idx, df.txt.values, df.tweeted.values)
```
## Ngram logistic regression
```
veczr = CountVectorizer(ngram_range=(1,3), tokenizer=tokenize)
trn_term_doc = veczr.fit_transform(trn)
val_term_doc = veczr.transform(val)
trn_term_doc.shape, trn_term_doc.sum()
y=trn_y
x=trn_term_doc.sign()
val_x = val_term_doc.sign()
p = x[np.argwhere(y!=0)[:,0]].sum(0)+1
q = x[np.argwhere(y==0)[:,0]].sum(0)+1
r = np.log((p/p.sum())/(q/q.sum()))
b = np.log(len(p)/len(q))
pre_preds = val_term_doc @ r.T + b
preds = pre_preds.T>0
(preds==val_y).mean()
m = LogisticRegression(C=0.1, fit_intercept=False)
m.fit(x, y);
preds = m.predict(val_x)
(preds.T==val_y).mean()
probs = m.predict_proba(val_x)[:,1]
from sklearn.metrics import precision_recall_curve, average_precision_score
import matplotlib.pyplot as plt
precision, recall, _ = precision_recall_curve(val_y, probs)
average_precision = average_precision_score(val_y, probs)
plt.step(recall, precision, color='b', alpha=0.2, where='post')
plt.fill_between(recall, precision, step='post', alpha=0.2, color='b')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall curve: AUC={0:0.2f}'.format(average_precision));
recall[precision>=0.6][0]
df_val = df.iloc[sorted(val_idx)]
incorrect_yes = np.where((preds != val_y) & (val_y == 0))[0]
most_incorrect_yes = np.argsort(-probs[incorrect_yes])
txts = df_val.iloc[incorrect_yes[most_incorrect_yes[:10]]]
txts[["link", "title", "summary"]]
' '.join(txts.link.values)
incorrect_no = np.where((preds != val_y) & (val_y == 1))[0]
most_incorrect_no = np.argsort(probs[incorrect_no])
txts = df_val.iloc[incorrect_no[most_incorrect_no[:10]]]
txts[["link", "title", "summary"]]
' '.join(txts.link.values)
to_review = np.where((preds > 0.8) & (val_y == 0))[0]
to_review_idx = np.argsort(-probs[to_review])
txts = df_val.iloc[to_review[to_review_idx]]
txt_html = ('<li><a href="http://' + txts.link + '">' + txts.title.str.replace('\n',' ') + '</a>: '
+ txts.summary.str.replace('\n',' ') + '</li>').values
full_html = (f"""<!DOCTYPE html>
<html>
<head><title>Brundage Bot Backfill</title></head>
<body>
<ul>
{os.linesep.join(txt_html)}
</ul>
</body>
</html>""")
```
## Learner
```
veczr = CountVectorizer(ngram_range=(1,3), tokenizer=tokenize, max_features=vocab_size)
trn_term_doc = veczr.fit_transform(trn)
val_term_doc = veczr.transform(val)
trn_term_doc.shape, trn_term_doc.sum()
md = TextClassifierData.from_bow(trn_term_doc, trn_y, val_term_doc, val_y, sl)
learner = md.dotprod_nb_learner(r_adj=20)
learner.fit(0.02, 4, wds=1e-6, cycle_len=1)
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt
def prec_at_6(preds,targs):
precision, recall, _ = precision_recall_curve(targs[:,1], preds[:,1])
return recall[precision>=0.6][0]
prec_at_6(*learner.predict_with_targs())
```
<a href="https://colab.research.google.com/github/keithvtls/Numerical-Method-Activities/blob/main/Week%203-5%20-%20Roots%20of%20Equations/NuMeth_Group_4_Act_3Roots_of_Linear_Equation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### CONTRIBUTION
The group agreed that the grade given should be the same for every member. Each member participated and contributed ideas to finish this user manual before the due date, and each member would like to thank their fellow group members.
# Inside the Module
```
### Brute force algorithm(f(x)=0)
def f_of_x(f,roots,tol,i, epochs=100):
x_roots=[] # list of roots
n_roots= roots # number of roots needed to find
incre = i #increments
h = tol #tolerance is the starting guess
for epoch in range(epochs): # the list of iteration that will be using
if np.isclose(f(h),0): # applying current h or the tolerance in the equation and the approximation on f(x) = 0
x_roots.insert(len(x_roots), h)
end_epochs = epoch
if len(x_roots) == n_roots:
break # once the root is found it will stop and print the root
h+=incre # the change of value in h wherein if the roots did not find it will going to loop
return x_roots, end_epochs # returning the value of the roots and the iteration or the epochs
### brute force algorithm (in terms of x)
def in_terms_of_x(eq,tol,epochs=100):
funcs = eq # equation to be solved
x_roots=[] # list of roots
n_roots = len(funcs) # How many roots needed to find according to the length of the equation
# epochs= begin_epochs # number of iteration
h = tol # tolerance or the guess to adjust
for func in funcs:
x = 0 # initial value or initial guess
for epoch in range(epochs): # the list of iteration that will be using
x_prime = func(x)
if np.allclose(x, x_prime,h):
x_roots.insert(len(x_roots),x_prime)
break # once the root is found it will stop and print the root
x = x_prime
return x_roots, epochs # returning the value of the roots and the iteration or the epochs
### newton-raphson method
def newt_raphson(func_eq,prime_eq, inits, epochs=100):
f = func_eq # first equation
f_prime = prime_eq # second equation
# epochs= max_iter # number of iteration
x_inits = inits # guess of the roots in range
roots = [] # list of roots
for x_init in x_inits:
x = x_init
for epoch in range(epochs):
x_prime = x - (f(x)/f_prime(x))
if np.allclose(x, x_prime):
roots.append(x)
break # once the root is found it will stop and print the root
x = x_prime
return roots, epochs # returning the value of the roots and the iteration or the epochs
```
The module also imports NumPy and Matplotlib at the top. For this activity it was given the name `first_two_method`, to avoid confusion with the last three methods.
# On the package
The package was named `numeth_yon`: "numeth" stands for the course, Numerical Methods, while "yon" is the group number.
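For reference, the top of the module file consistent with this description might look like the sketch below; apart from the names `numeth_yon` and `first_two_method` taken from the text above, the exact contents are an assumption:
```
# numeth_yon/first_two_method.py: sketch of the module header described above
import numpy as np
import matplotlib.pyplot as plt

# ... f_of_x, in_terms_of_x, and newt_raphson (shown in the previous section) are defined below
```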
# Explaining how to use your package and module with examples.
```
'''
Import the NumPy package (import numpy as np), and import the first_two_method module from the numeth_yon package so the equations can be run.
Every call has to be prefixed with the module name, i.e. first_two_method followed by the function defined inside the module: f_of_x,
in_terms_of_x and newt_raphson, as will be seen below:
'''
import numpy as np
from numeth_yon import first_two_method
'''
To use the f_of_x function, the user must provide the equation, the number of roots to find, a starting guess (the tolerance argument),
and an increment; the number of iterations already defaults to 100. The purpose of the function
is to find the roots of the given equation.
'''
sample1 = lambda x: x**2+x-2
roots, epochs = first_two_method.f_of_x(sample1,2,-10,1) # the first_two_method is the module that is next to the function inside of the module
print("The root is: {}, found at epoch {}".format(roots,epochs+1))
# Output: The root is: [-2, 1], found at epoch 12
'''
In this method, the brute force algorithm in terms of x, the user must provide the equations to be solved and a tolerance
for judging convergence of the guess; the number of iterations again defaults to 100.
'''
sample2 = lambda x: 2-x**2
sample3 = lambda x: np.sqrt(2-x)
funcs = [sample2, sample3]
roots, epochs = first_two_method.in_terms_of_x(funcs,1e-05) # the first_two_method is the module that is next to the function inside of the module
print("The root is {} found after {} epochs".format(roots,epochs))
# Output: The root is [-2, 1.00000172977337] found after 100 epochs
'''
To use newt_raphson, the user must provide an equation, the derivative of that equation,
and a range of initial guesses to search from; the number of iterations defaults to 100.
The purpose of the function is to find the roots of the given equation.
'''
g = lambda x: 2*x**2 - 5*x + 3
g_prime = lambda x: 4*x-5
# the first_two_method is the module that is next to the function inside of the module
roots, epochs = first_two_method.newt_raphson(g,g_prime, np.arange(0,5))
x_roots = np.round(roots,3)
x_roots = np.unique(x_roots)
# Output: The root is [1. 1.5] found after 100 epochs
```
To understand this further, please refer to the PDF version.
#### **Activity 2.1**
1. Identify **two more polynomials** preferably **orders higher than 2** and **two transcendental functions**. Write them in **LaTex**.
2. Plot their graphs you may choose your own set of pre-images.
3. Manually solve for their roots and plot them along the graph of the equation.
```
import numpy as np
import matplotlib.pyplot as plt
```
$$ f(x) = x^3+3x^2-4x $$
```
##Input the function on the define f(x)
def f(x):
return x**3+3*x**2-4*x
x0, x1,x2 = -4, 0, 1 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(-5,2,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0,x1,x2],[0,0,0], c='red', label='roots')
plt.legend()
plt.show()
```
$$ f(x) = 2x^3+3x^2-11x-6 $$
```
def f(x):
return 2*x**3+3*x**2-11*x-6
x0, x1,x2 = -3, -0.5, 2 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(-5,4,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0,x1,x2],[0,0,0], c='red', label='roots')
plt.legend()
plt.show()
```
$$ f(x) = log(x) $$
```
def f(x):
return np.log(x)
x0 = 1 ## Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(0.2,4,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0],[0], c='red', label='roots')
plt.legend()
plt.show()
```
$$ f(x) = \sqrt{9-x} $$
```
def f(x):
return np.sqrt(9-x)
x0 = 9 # Roots of the function f(x)
## Plotting the roots in a graph
X = np.linspace(1,9,dtype=float)
Y = f(X)
## Creating the grid of the graph
plt.figure(figsize=(10,5))
plt.axhline(y=0,color='black')
plt.axvline(x=0,color='black')
plt.grid()
## Plotting the roots in the graph
plt.plot(X,Y,color='blue')
plt.scatter([x0],[0], c='red', label='roots')
plt.legend()
plt.show()
```
```
import pandas as pd
import glob, os
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
basedir = '/Users/simon/Work/ECOSAT3/DATA/Dredges/'
gpd.read_file('/Users/simon/Work/ECOSAT3/DATA/Dredges/DR01/shapefile/dredge_01_events.shp')
#print glob.glob('%s/DR*')
#print os.listdir(basedir)
Dredge_Dict = {}
for j in range(1,56):
dredge_folder_name = 'DR%02d' % j
#shapefile = '%s/%s/shapefile/dredge_%02d_events.shp' % (basedir,dredge_folder_name,j)
shapefile = glob.glob('%s/%s/shapefile/*.shp' % (basedir,dredge_folder_name))
if len(shapefile)==0:
shapefile = glob.glob('%s/%s/Shapefile/*.shp' % (basedir,dredge_folder_name))
if len(shapefile)==0:
shapefile = glob.glob('%s/%s/shapefiles/*.shp' % (basedir,dredge_folder_name))
print shapefile
Events = gpd.read_file(shapefile[0])
#print Events
dredge_path_X = []
dredge_path_Y = []
try:
for geometry in Events.geometry:
if geometry is not None:
dredge_path_X.append(geometry.xy[0])
dredge_path_Y.append(geometry.xy[1])
#dredge_path_X = [geometry.xy[0] for geometry in Events.geometry]
#dredge_path_Y = [geometry.xy[1] for geometry in Events.geometry]
#print dredge_path_X
#plt.plot(dredge_path_X,dredge_path_Y,'k.')
Dredge_Dict['%02d' % j] = (np.mean(dredge_path_X),np.mean(dredge_path_Y))
plt.plot(np.mean(dredge_path_X),np.mean(dredge_path_Y),'k.')
except:
print 'failed for DR%02d' % j
#print Events.geometry
print
plt.show()
print Dredge_Dict
rock_descriptions = pd.read_csv('../ELOG/in2019_v04_rocks_elog.csv')
rock_descriptions.Dredge = rock_descriptions.Dredge.str.replace('DR49','D49', regex=False)
rock_descriptions.Dredge = rock_descriptions.Dredge.str.replace('D12 ','D12', regex=False)
rock_descriptions.Dredge = rock_descriptions.Dredge.str.replace('D55 ','D55', regex=False)
rock_descriptions.Dredge = rock_descriptions.Dredge.str.replace('D91','D53', regex=False)
print rock_descriptions.Dredge.unique()
print rock_descriptions.columns
print rock_descriptions['Rock name']
rock_descriptions = rock_descriptions.assign(Longitude=np.nan,Latitude=np.nan)
for DredgeNumberString in rock_descriptions.Dredge.unique():
DredgeNumber = DredgeNumberString[-2:]
print DredgeNumber
ind = np.where(rock_descriptions.Dredge.str.match(DredgeNumberString))
#print ind
rock_descriptions.Longitude.iloc[ind] = Dredge_Dict[DredgeNumber][0]
rock_descriptions.Latitude.iloc[ind] = Dredge_Dict[DredgeNumber][1]
#rock_descriptions
df_basalt = rock_descriptions[rock_descriptions['Rock name'].str.contains('basalt')]
print df_basalt.Dredge.unique()
df_feldspar = rock_descriptions[rock_descriptions['Rock name'].str.contains('feldspar')]
#df_feldspar.Dredge.unique()
df_volcanic = rock_descriptions[rock_descriptions['Rock name'].str.contains('volcanic')]
print df_volcanic.Dredge.unique()
#df_volcanic = rock_descriptions[rock_descriptions['Rock name'].str.contains('volcanic')]
#df_volcanic.Dredge.unique()
plt.plot([Dredge_Dict[item][0] for item in Dredge_Dict],[Dredge_Dict[item][1] for item in Dredge_Dict],'bx')
plt.plot(rock_descriptions.Longitude,rock_descriptions.Latitude,'k.',markersize=15)
plt.plot(df_basalt.Longitude,df_basalt.Latitude,'r.',markersize=7,zorder=2)
plt.plot(df_feldspar.Longitude,df_feldspar.Latitude,'c.',markersize=7,zorder=2)
ds = xr.open_dataset('/Users/simon/Data/GMTdata/hawaii2017/earth_relief_02m.grd')
plt.figure(figsize=(12,12))
ds['z'].plot(vmin=-7000,vmax=1000,cmap=plt.cm.gray)
plt.plot([Dredge_Dict[item][0] for item in Dredge_Dict],[Dredge_Dict[item][1] for item in Dredge_Dict],'bx',markersize=15)
plt.plot(rock_descriptions.Longitude,rock_descriptions.Latitude,'ko',markersize=20)
plt.plot(df_basalt.Longitude,df_basalt.Latitude,'r.',markersize=17,zorder=2)
plt.plot(df_volcanic.Longitude,df_volcanic.Latitude,'r.',markersize=17,zorder=2)
#plt.plot(df_feldspar.Longitude,df_feldspar.Latitude,'c.',markersize=7,zorder=2)
plt.xlim(149,163)
plt.ylim(-27,-9)
plt.show()
```
# DAT210x - Programming with Python for DS
## Module4- Lab2
```
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
from sklearn.decomposition import PCA
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
```
### Some Boilerplate Code
For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations. We've added some notes to the code in case you're interested in knowing what it's doing:
### A Note on SKLearn's `.transform()` calls:
Any time you perform a transformation on your data, you lose the column header names because the output of SciKit-Learn's `.transform()` method is an NDArray and not a dataframe.
This actually makes a lot of sense because there are essentially two types of transformations:
- Those that adjust the scale of your features, and
- Those that alter the number of features, perhaps even changing their values entirely.
An example of adjusting the scale of a feature would be changing centimeters to inches. Changing the feature entirely would be like using PCA to reduce 300 columns to 30. In either case, the original column's units have either been altered or no longer exist at all, so it's up to you to assign names to your columns after any transformation, if you'd like to store the resulting NDArray back into a dataframe.
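As a tiny, self-contained illustration of re-attaching names after a transform (the toy DataFrame below is made up for this example; the `scaleFeaturesDF` helper that follows does the same thing for the lab's data):
```
import pandas as pd
from sklearn import preprocessing

# A throwaway two-column frame, just to show the round trip
tiny = pd.DataFrame({'height_cm': [150., 160., 170.], 'weight_kg': [50., 60., 70.]})
arr = preprocessing.StandardScaler().fit_transform(tiny)   # returns a plain NDArray
tiny_scaled = pd.DataFrame(arr, columns=tiny.columns)      # column names re-attached by us
print(type(arr).__name__, list(tiny_scaled.columns))
```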
```
def scaleFeaturesDF(df):
# Feature scaling is a type of transformation that only changes the
# scale, but not number of features. Because of this, we can still
# use the original dataset's column names... so long as we keep in
# mind that the _units_ have been altered:
scaled = preprocessing.StandardScaler().fit_transform(df)
scaled = pd.DataFrame(scaled, columns=df.columns)
print("New Variances:\n", scaled.var())
print("New Describe:\n", scaled.describe())
return scaled
```
SKLearn contains many methods for transforming your features by scaling them (a type of pre-processing):
- `RobustScaler`
- `Normalizer`
- `MinMaxScaler`
- `MaxAbsScaler`
- `StandardScaler`
- ...
http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
However, in order to be effective at PCA, there are a few requirements that must be met, and which will drive the selection of your scaler. PCA requires that your data be standardized -- in other words, its _mean_ should equal 0, and it should have unit variance.
SKLearn's regular `Normalizer()` doesn't zero out the mean of your data, it only clamps it, so it could be inappropriate to use depending on your data. `MinMaxScaler` and `MaxAbsScaler` both fail to set a unit variance, so you won't be using them here either. `RobustScaler` can work, again depending on your data (watch for outliers!). So for this assignment, you're going to use the `StandardScaler`. Get familiar with it by visiting these two websites:
- http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler
- http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler
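As a quick illustration of why `StandardScaler` fits these requirements (a sketch on a toy array, not part of the original lab), compare what two scalers do to the per-column mean and variance:
```
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

toy = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 900.0]])

standardized = StandardScaler().fit_transform(toy)
minmaxed = MinMaxScaler().fit_transform(toy)

# StandardScaler gives each column mean ~0 and unit variance, which is what PCA wants
print(standardized.mean(axis=0), standardized.var(axis=0))

# MinMaxScaler squeezes each column into [0, 1], but the variances are not 1
print(minmaxed.mean(axis=0), minmaxed.var(axis=0))
```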
Lastly, some code to help with visualizations:
```
def drawVectors(transformed_features, components_, columns, plt, scaled):
if not scaled:
return plt.axes() # No cheating ;-)
num_columns = len(columns)
    # This function will project your *original* features (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## visualize projections
    # Sort each column by its length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75)
return ax
```
### And Now, The Assignment
```
# Do * NOT * alter this line, until instructed!
scaleFeatures = True
```
Load up the dataset specified on the lab instructions page and remove any and all _rows_ that have a NaN in them. You should be a pro at this by now ;-)
**QUESTION**: Should the `id` column be included in your dataset as a feature?
```
kidneydata = pd.read_csv('Datasets/kidney_disease.csv', index_col=0)
kidneydata = kidneydata.dropna(axis=0)
```
Let's build some color-coded labels; the actual label feature will be removed prior to executing PCA, since it's unsupervised. You're only labeling by color so you can see the effects of PCA:
```
labels = ['red' if i=='ckd' else 'green' for i in kidneydata.classification]
```
Use an indexer to select only the following columns: `['bgr','wc','rc']`
```
s1 = kidneydata.loc[:, ['bgr','wc','rc']]
```
Either take a look at the dataset's webpage in the attribute info section of UCI's [Chronic Kidney Disease](https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease) page, or alternatively, you can look at the first few rows of your dataframe using `.head()`. What kind of data type should these three columns be? Compare what you see with the results when you print out your dataframe's `dtypes`.
If Pandas did not properly detect and convert your columns to the data types you expected, use an appropriate command to coerce these features to the right type.
```
tip = s1.head()
print(tip)
s1.wc = pd.to_numeric(s1.wc, errors='coerce')
s1.rc = pd.to_numeric(s1.rc, errors='coerce')
```
PCA operates based on variance. The variable with the greatest variance will dominate. Examine your data using a command that will check the variance of every feature in your dataset, and then print out the results. Also print out the results of running `.describe` on your dataset.
_Hint:_ If you do not see all three variables: `'bgr'`, `'wc'`, and `'rc'`, then it's likely you did not complete the previous step properly.
Below, we assume your dataframe's variable is named `df`. If it isn't, make the appropriate changes. But do not alter the code in `scaleFeaturesDF()` just yet!
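For example, a variance check at this point might look like the following (a sketch, assuming your three selected columns are still in `s1` as above):
```
print(s1.var())
print(s1.describe())
```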
```
df = s1
if scaleFeatures: df = scaleFeaturesDF(df)
```
Run PCA on your dataset, reducing it to 2 principal components. Make sure your PCA model is saved in a variable called `'pca'`, and that the results of your transformation are saved in another variable `'T'`:
```
pca = PCA(n_components=2)
pca.fit(df)
T = pca.transform(df)
```
Now, plot the transformed data as a scatter plot. Recall that transforming the data will result in a NumPy NDArray. You can either use MatPlotLib to graph it directly, or you can convert it back to a DataFrame and have Pandas do it for you.
Since we've already demonstrated how to plot directly with MatPlotLib in `Module4/assignment1.ipynb`, this time we'll show you how to convert your transformed data back into a Pandas DataFrame and have Pandas plot it from there.
```
# Since we transformed via PCA, we no longer have column names; but we know we
# are in `principal-component` space, so we'll just define the coordinates accordingly:
ax = drawVectors(T, pca.components_, df.columns.values, plt, scaleFeatures)
T = pd.DataFrame(T)
T.columns = ['component1', 'component2']
T.plot.scatter(x='component1', y='component2', marker='o', c=labels, alpha=0.75, ax=ax)
plt.show()
```
```
import json
import numpy as np
import operator
import math
def r_precision(G, R):
limit_R = R[:len(G)]
if len(G) != 0:
return len(list(set(G).intersection(set(limit_R)))) * 1.0 / len(G)
else:
return 0
def ndcg(G, R):
r = [1 if i in set(G) else 0 for i in R]
r = np.asfarray(r)
dcg = r[0] + np.sum(r[1:] / np.log2(np.arange(2, r.size + 1)))
#k = len(set(G).intersection(set(R)))
k = len(G)
if k > 0:
idcg = 1 + np.sum(np.ones(k - 1) / np.log2(np.arange(2, k + 1)))
return dcg * 1.0 / idcg
else:
return 0
def clicks(G, R):
n = 1
for i in R:
if i in set(G):
            return ((n - 1) // 10) * 1.0  # floor division: the metric counts blocks of 10 positions
n += 1
return 51
GT = json.load(open('../DATA_PROCESSING/PL_TRACKS_ALL.json'))
# GT = json.load(open('../DATA_PROCESSING/PL_TRACKS_5_TEST.json'))
track_index = json.load(open('../DATA_PROCESSING/ALL_INDEX_READONLY/TRACK_INDEX_READONLY.json'))
track_index_reversed = {k:v for (v, k) in track_index.items()}
Task = str(9)
if Task == '1':
bar = 1
elif Task == '2':
bar = 5
elif Task == '5':
bar = 10
elif Task == '6':
bar = 25
elif Task == '7':
bar = 25
elif Task == '9':
bar = 100
```
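As a quick sanity check of these metrics, here is a toy example (my own made-up lists, not part of the challenge data):
```
G_toy = ['t1', 't2', 't3']
R_toy = ['t9', 't2', 't3', 't1', 't8']

print(r_precision(G_toy, R_toy))   # 2/3: only t2 and t3 fall within the first len(G) slots
print(ndcg(G_toy, R_toy))          # below 1.0 because the relevant tracks are not ranked first
print(clicks(G_toy, R_toy))        # 0.0: the first relevant track shows up within the first 10 slots
```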
## Load the seed and candidate lists for the chosen Task (change from here)
```
SEED = json.load(open('../DATA_PROCESSING/PL_TRACKS_5_TEST_T' + Task + '.json'))
GT_Now = {}
for pl in SEED:
GT_Now[pl] = []
for t in GT[pl]:
if t not in SEED[pl]:
GT_Now[pl].append(t)
Xing_Raw = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_Xing/Pure_Xing_T' + Task + '.json'))
Xing = {}
for pl in Xing_Raw:
Xing[pl] = []
for t in Xing_Raw[pl]:
Xing[pl].append(track_index_reversed[t])
QQ = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_QQ/T' + Task + '_100_500_50.json'))
W2V_raw = json.load(open('/home/xing/xing/xing/T' + Task + '.json'))
W2V = {}
for pl in W2V_raw:
W2V[pl] = []
for t in W2V_raw[pl]:
if t not in SEED[pl]:
W2V[pl].append(t)
if len(W2V[pl]) == 500:
break
N2V = json.load(open('../MODEL_5_Counter_Artist_Album/Pure_Node2Vec/Pure_Node2Vec_T' + Task + '.json'))
```
## Combine Method 1 (Used in current submission)
```
def combine_method_1(QQ_dict, Xing_dict):
R_combined = {}
for pl in Xing_dict:
if len(Xing_dict[pl]) == 500:
R_combined[pl] = Xing_dict[pl][:]
else:
rest = 500 - len(Xing_dict[pl])
R_combined[pl] = Xing_dict[pl][:]
for t in QQ_dict[pl]:
if t not in R_combined[pl]:
R_combined[pl].append(t)
if len(R_combined[pl]) == 500:
break
return R_combined
Combined_R_method_1 = combine_method_1(QQ, Xing)
R_precision_1 = {}
NDCG_1 = {}
Clicks_1 = {}
for pl in Xing:
R_precision_1[pl] = r_precision(GT_Now[pl], Combined_R_method_1[pl])
NDCG_1[pl] = ndcg(GT_Now[pl], Combined_R_method_1[pl])
Clicks_1[pl] = clicks(GT_Now[pl], Combined_R_method_1[pl])
print(sum(R_precision_1.values()) / len(R_precision_1))
print(sum(NDCG_1.values()) / len(NDCG_1))
print(sum(Clicks_1.values()) / len(Clicks_1))
```
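To see what the backfilling does, here is a small illustration on made-up playlist and track IDs (my own toy input, just to show the merge order):
```
toy_xing = {'pl_1': ['a', 'b']}
toy_qq = {'pl_1': ['b', 'c', 'd']}

# Xing's ranking is kept first, then QQ fills the remaining slots,
# skipping anything that was already recommended.
print(combine_method_1(toy_qq, toy_xing))   # {'pl_1': ['a', 'b', 'c', 'd']}
```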
## Combine Method 2
```
def rank(new, current, sort_base):
layer = list(set(new) - set(current))
layer_index = {}
for elem in layer:
layer_index[elem] = sort_base[elem]
layer_sorted = sorted(layer_index.items(), key=operator.itemgetter(1))
layer_final = [i[0] for i in layer_sorted]
return current + layer_final
def combine_method_2_sub_1(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, Xing_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_2(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, Xing_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Ax), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Bx & Aq), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_3(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index)
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index)
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], Xing_index)
# Layer 2 part 2
up_to_layer2_1 = rank(list(Bx & Aq), up_to_layer1, Xing_index)
# Layer 2 part 1
up_to_layer2_2 = rank(list(Ax & Bq), up_to_layer2_1, Xing_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, Xing_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, QQ_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, Xing_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, Xing_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, QQ_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_4(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = len(QQ_index) * 1.5
Xing_index = {}
for elem in Xing:
Xing_index[elem] = len(Xing_index) * 1.0
final_index = {}
for elem in list(set(Xing_index.keys()) | set(QQ_index.keys())):
if elem in Xing and elem not in QQ:
final_index[elem] = Xing_index[elem]
elif elem in QQ and elem not in Xing:
final_index[elem] = QQ_index[elem]
else:
final_index[elem] = Xing_index[elem] + QQ_index[elem]
Ax = set(Xing[:GT_len])
Bx = set(Xing[GT_len:])
Aq = set(QQ[:GT_len])
Bq = set(QQ[GT_len:])
# Layer 1
up_to_layer1 = rank(list(Ax & Aq), [], final_index)
# Layer 2 part 1
up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, final_index)
# Layer 2 part 2
up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, final_index)
# Layer 3 part 1
up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, final_index)
# Layer 3 part 2
up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, final_index)
# Layer 3 part 3
up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, final_index)
# Layer 4 part 1
up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, final_index)
# Layer 4 part 2
up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, final_index)
return up_to_layer4_2[:500]
def combine_method_2_sub_5(QQ, Xing, GT_len):
QQ_index = {}
for elem in QQ:
QQ_index[elem] = (500 - len(QQ_index)) * 2.0
Xing_index = {}
for elem in Xing:
Xing_index[elem] = (500 - len(Xing_index)) * 3.0
final_index = {}
for elem in list(set(Xing_index.keys()) | set(QQ_index.keys())):
if elem in Xing and elem not in QQ:
final_index[elem] = Xing_index[elem]
elif elem in QQ and elem not in Xing:
final_index[elem] = QQ_index[elem]
else:
final_index[elem] = Xing_index[elem] + QQ_index[elem]
layer_sorted = sorted(final_index.items(), key=operator.itemgetter(1), reverse = True)
layer_final = [i[0] for i in layer_sorted]
return layer_final[:500]
def combine_method_2_sub_6(QQ, Xing, W2V, GT_len):
all_tracks = list(set(QQ + Xing + W2V))
QQ_index = {}
t = 0
for elem in QQ:
QQ_index[elem] = (501 - t) * 2.0
t += 1
Xing_index = {}
t = 0
for elem in Xing:
Xing_index[elem] = (501 - t) * 3.0
t += 1
W2V_index = {}
t = 0
for elem in W2V:
W2V_index[elem] = (501 - t) * 1.5
t += 1
for elem in all_tracks:
if elem not in QQ_index:
QQ_index[elem] = 0
if elem not in Xing_index:
Xing_index[elem] = 0
if elem not in W2V_index:
W2V_index[elem] = 0
final_index = {}
for elem in all_tracks:
final_index[elem] = Xing_index[elem] + QQ_index[elem] + W2V_index[elem]
layer_sorted = sorted(final_index.items(), key=operator.itemgetter(1), reverse = True)
layer_final = [i[0] for i in layer_sorted]
return layer_final[:500]
# Ax = set(Xing[:GT_len])
# Bx = set(Xing[GT_len:])
# Aq = set(QQ[:GT_len])
# Bq = set(QQ[GT_len:])
# # Layer 1
# up_to_layer1 = rank(list(Ax & Aq), [], final_index)
# # Layer 2 part 1
# up_to_layer2_1 = rank(list(Ax & Bq), up_to_layer1, final_index)
# # Layer 2 part 2
# up_to_layer2_2 = rank(list(Bx & Aq), up_to_layer2_1, final_index)
# # Layer 3 part 1
# up_to_layer3_1 = rank(list(Ax), up_to_layer2_2, final_index)
# # Layer 3 part 2
# up_to_layer3_2 = rank(list(Aq), up_to_layer3_1, final_index)
# # Layer 3 part 3
# up_to_layer3_3 = rank(list(Bx & Bq), up_to_layer3_2, final_index)
# # Layer 4 part 1
# up_to_layer4_1 = rank(list(Bx), up_to_layer3_3, final_index)
# # Layer 4 part 2
# up_to_layer4_2 = rank(list(Bq), up_to_layer4_1, final_index)
# return up_to_layer4_2[:500]
def combine_method_2(QQ_dict, Xing_dict, W2V_dict):
global GT
R_combined = {}
for pl in Xing_dict:
R_combined[pl] = combine_method_2_sub_6(QQ_dict[pl], Xing_dict[pl], W2V_dict[pl], len(GT_Now[pl]))
return R_combined
Combined_R_method_2 = combine_method_2(QQ, Xing, W2V)
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in Xing:
R_precision_2[pl] = r_precision(GT_Now[pl], Combined_R_method_2[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], Combined_R_method_2[pl])
Clicks_2[pl] = clicks(GT_Now[pl], Combined_R_method_2[pl])
print(sum(R_precision_2.values()) / len(R_precision_2))
print(sum(NDCG_2.values()) / len(NDCG_2))
print(sum(Clicks_2.values()) / len(Clicks_2))
# Recorded output from one run:
# 0.217464854917
# 0.4193948692135327
# 0.722
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in W2V:
R_precision_2[pl] = r_precision(GT_Now[pl], W2V[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], W2V[pl])
Clicks_2[pl] = clicks(GT_Now[pl], W2V[pl])
print(sum(R_precision_2.values()) / len(R_precision_2))
print(sum(NDCG_2.values()) / len(NDCG_2))
print(sum(Clicks_2.values()) / len(Clicks_2))
R_precision_2 = {}
NDCG_2 = {}
Clicks_2 = {}
for pl in N2V:
R_precision_2[pl] = r_precision(GT_Now[pl], N2V[pl])
NDCG_2[pl] = ndcg(GT_Now[pl], N2V[pl])
Clicks_2[pl] = clicks(GT_Now[pl], N2V[pl])
print(sum(R_precision_2.values()) / len(R_precision_2))
print(sum(NDCG_2.values()) / len(NDCG_2))
print(sum(Clicks_2.values()) / len(Clicks_2))
R_precision_2
```
```
import pandas as pd
import cobra as co
```
# Convert the tables that make up gapseq's "full model" to a cobrapy Model object
## Download the relevant tables into this repo's data folder
```
!wget -P "data/" "https://raw.githubusercontent.com/jotech/gapseq/f3d74944e5e4ee5a6ab328c4fd46b35fd53cee73/dat/seed_reactions_corrected.tsv"
!wget -P "data/" "https://raw.githubusercontent.com/jotech/gapseq/f3d74944e5e4ee5a6ab328c4fd46b35fd53cee73/dat/seed_metabolites_edited.tsv"
!wget -P "data/" "https://raw.githubusercontent.com/jotech/gapseq/master/dat/seed_Enzyme_Class_Reactions_Aliases_unique_edited.tsv"
```
## Import the tables as pandas DataFrames
```
df_reactions = pd.read_csv("data/seed_reactions_corrected.tsv", sep='\t')
df_metabolites = pd.read_csv("data/seed_metabolites_edited.tsv", sep='\t')
df_ec = pd.read_csv("data/seed_Enzyme_Class_Reactions_Aliases_unique_edited.tsv", sep='\t')
df_rxns_corrected = df_reactions[df_reactions["gapseq.status"]=="corrected"]
```
## Introduce MIRIAM compliant headers to annotation columns
```
df_metabolites = df_metabolites.rename(
columns={"biocycID": "biocyc",
"biggID": "bigg.metabolite",
"keggID": "kegg.compound",
"InChI": "inchi",
"chebiID": "chebi",
"reactomeID": "reactome",
"hmdbID": "hmdb",
"InChIKey": "inchikey",
"MNX_ID": "metanetx.chemical"
}
)
df_metabolites
```
## Build the cobrapy model object from scratch
```
universal = co.Model()
universal.id = "universal"
universal.name = "Gap-Seq-derived Universal Model"
```
### Build a list of reaction objects from info in the table and the reaction equation strings
```
df_rxns_corrected = df_rxns_corrected.rename(
columns={"ec_numbers": "ec-code"
}
)
def handle_annotation(entry):
# Could be a list delimited by ';', a single string, an int or float, or NaN.
# Here we try to handle each case
if pd.isnull(entry):
return ''
if type(entry) != str:
return str(entry)
if ";" in entry:
return entry.split(";")
return str(entry)
reactions = []
annotation_columns = [
"ec-code"
]
notes_columns = [
'abbreviation',
'code',
'stoichiometry',
'is_transport',
'equation',
'definition',
'reversibility',
'direction',
'abstract_reaction',
'pathways',
'aliases',
'deltag',
'deltagerr',
'compound_ids',
'status',
'is_obsolete',
'linked_reaction',
'notes',
'is_copy_of',
'gapseq.status'
]
def convert_equation_to_bigg_compartments(equation):
equation = equation.replace("[0]", "_c").replace("[1]", "_e").replace("[2]", "_p")
equation = equation.replace("(", "").replace(")", "")
return str(equation)
def define_rxn(row):
# A function to define the reaction objects based on the row of the corrected Reaction DataFrame
rxn = co.Reaction(
id = str(row["id"]),
name = str(row["name"]),
)
rxn.annotation.update({k : handle_annotation(row[k]) for k in annotation_columns if handle_annotation(row[k]) is not None})
rxn.notes.update({k : handle_annotation(row[k]) for k in notes_columns if handle_annotation(row[k]) is not None})
rxn.annotation["seed.reaction"] = row["id"]
universal.add_reactions([rxn])
rxn.build_reaction_from_string(convert_equation_to_bigg_compartments(row.equation))
for index, row in df_rxns_corrected.iterrows():
define_rxn(row)
```
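A quick sanity check after this step (my addition, simply to confirm that reactions and their metabolites were created by `build_reaction_from_string`):
```
print(len(universal.reactions))
print(len(universal.metabolites))
```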
### Annotate the metabolite objects with info from the metabolite table
```
annotation_columns = [
"biocyc",
"bigg.metabolite",
"kegg.compound",
"inchi",
"chebi",
"reactome",
"hmdb",
"inchikey",
"metanetx.chemical"
]
notes_columns = [
'abbreviation',
'mass',
'source',
'is_core',
'is_obsolete',
'linked_compound',
'is_cofactor',
'deltag',
'deltagerr',
'pka',
'pkb',
'abstract_compound',
'comprised_of',
'aliases'
]
for index, row in df_metabolites.iterrows():
# Identify which metabolites have already been added and supplement all additional
# info from the metabolite DataFrame
metabolites = universal.metabolites.query(row["id"])
for met in metabolites:
met.name = row["name"]
met.formula = handle_annotation(row["formula"])
met.charge = row["charge"]
met.compartment = met.id.split("_")[1]
met.annotation.update({k : handle_annotation(row[k]) for k in annotation_columns if handle_annotation(row[k]) is not None})
met.notes.update({k : handle_annotation(row[k]) for k in notes_columns if handle_annotation(row[k]) is not None})
met.annotation["seed.compound"]=row["id"]
```
### Add EC number annotation to reactions
```
for index, row in df_ec.iterrows():
reaction_strings = []
if "|" in row["MS ID"]:
reaction_strings = row["MS ID"].split("|")
else:
reaction_strings.append(row["MS ID"])
for r in reaction_strings:
reaction_object = universal.reactions.query(r)
if reaction_object:
reaction_object[0].annotation["ec-code"] = str(row["External ID"])
```
### Write the model as SBML
```
import logging
logging.getLogger().setLevel(logging.DEBUG)
co.io.write_sbml_model(universal, "universal.sbml")
```
# Test with Memote
```
!memote report snapshot "universal.sbml"
```
### Summary Statistics and Quick Viz!
```
import pandas as pd
pd.options.display.max_rows = 30
```
### START HERE
Now that we've learned how to get our dataframe the way we want it, let's try to get some fun out of it!
We have our data, now what?
We usually like to learn from it. We might want to find out some summary statistics about the features of the data.
Let's load in our cereal dataset again.
```
df = pd.read_csv('../data/cereal.csv', index_col = 0)
df.head(15)
```
## Pandas describe()
Pandas has a lot up its sleeve, but one of the most useful functions is called `describe()`, and it does exactly that: it _describes_ your data. Let's try it out.
```
df.describe()
```
This table will tell you about:
- `count`: The number of non-NA/null observations.
- `mean`: The mean of the column
- `std` : The standard deviation of a column
- `min`: The min value for a column
- `max`: The max value for a column
- the 25th, 50th, and 75th percentiles of the observations (by default)
You can make changes to either limit how much is shown or to extend it.
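For example, `describe()` accepts a `percentiles` argument if you want different cut points (shown here purely as an illustration):
```
df.describe(percentiles=[0.1, 0.9])
```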
```
df.describe(include = "all")
```
Adding `include = "all"` withinh the brackets adds some additional statistics
- `unique`: how many observations are unique
- `top`: which observation value is most occuring
- `freq`: what is the frequency of the most occuring observation
You can also get single statistics for each column using
`df.mean()`, `df.std()`, `df.count()`, `df.median()`, or `df.sum()`. Some of these might produce some wild results, especially if the column is a qualitative observation.
```
df.sum()
```
## `pd.value_counts`
If you want to get a frequency table of a categorical column, `pd.value_counts` is very useful.
In the previous slides we talked about getting a single column from a dataframe using double brackets, like `df[['column-name']]`. That's great, but to use `value_counts` we need a different structure, which you'll learn more about in the next module. Instead of getting a single column with double brackets, we use single brackets like so:
```
manufacturer_column = df["mfr"]
manufacturer_column
```
We saved the object in a variable called `manufacturer_column`, in the same way as we have saved dataframes before.
Next we can call `.value_counts()` on the column we just saved.
```
manufacturer_freq = manufacturer_column.value_counts()
manufacturer_freq
```
We can then see the frequency of each qualitative value. _Careful here! Notice that instead of selecting with double brackets, we grabbed the column with single brackets and then called `value_counts()` on that object._
This looks a bit funny, though, doesn't it? That's because this output isn't our usual dataframe type, so we need to make it so. We can make it prettier with the following:
```
manufacturer_freq_df = pd.DataFrame(manufacturer_freq)
manufacturer_freq_df
```
Ah! That's what we are used to. The column name is specifying the counts of the manufacturers, but maybe we should rename that column to something that makes more sense. Let's rename that column to `freq`. But how?
We use something called `rename` of course! When we rename things it's especially important that we don't forget to assign it to a variable or the column name won't stick! Let's assign it to `freq_mfr_df`.
```
freq_mfr_df = manufacturer_freq_df.rename(columns = {"mfr": "freq"})
freq_mfr_df
```
_Note: The code above uses something we've never seen before: `{}` curly brackets!
These have a special meaning, but for now you need to know that the `columns` argument needs to be set to `"old-column-name" : "new-column-name"` inside curly brackets for us to rename the column._
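The same pattern extends to several columns at once. For instance (an illustrative example, not part of the lesson):
```
df.rename(columns = {"mfr": "manufacturer", "sugars": "sugar_grams"})
```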
## Quick Viz with Pandas
If we want to visualize things using different plots, we can do that too! Take the `freq_mfr_df` object. This would be great to express as a bar chart. But how do we do it?!
```
freq_mfr_df.plot.bar();
```
The important thing to notice here is that we want to `.plot` a `.bar()` graph.
You may also have noticed the `;` after the code. This just prevents an additional, unnecessary output such as
```
<matplotlib.axes._subplots.AxesSubplot at 0x1227555c0>
```
which we don't really need.
What else can we plot from our original cereal dataframe? Maybe we want to see the relationship between `sugars` and `calories` in cereals?
This would require a `scatter` plot!
In the code we would need to specify the x and y axis which means we just need to put in the column names.
```
df.plot.scatter(x='sugars',y='calories');
```
Something you may have noticed is that there are 77 cereals, but there don't seem to be 77 data points! That's because some of them are lying on top of each other, with the same sugar and calorie values. It may be useful to set the opacity of the points to differentiate them. Opacity is set with the argument `alpha` and accepts values between 0 and 1, with 1 being full intensity.
```
df.plot.scatter(x='sugars',y='calories', alpha= .3)
```
Look at that! Now we can see there are multiple cereals that have 2.5g of sugar with 100 calories.
What if we wanted to change the colour to purple? Enter the `color` parameter!
```
plota = df.plot.scatter(x="sugars",
y="calories",
alpha= .3,
color = "purple")
```
Those data points look pretty small. To enlarge them, the argument `s` should do the trick. Also, every good graph should have a title! Let's take this opportunity to set the `title` argument to something.
```
ploty = df.plot.scatter(x="sugars",
y="calories",
alpha= 0.3,
color="Darkblue",
s= 50,
title = "The relationship between sugar and calories in cereals")
```
Let's try this in the assignment now.
```
# Note: `ploty` above is a matplotlib Axes object, so it has no `.show()` method;
# the chart renders on its own in the notebook.
# The lines below belong to the assignment (they use `position_freq_df` from the
# Canucks dataset, which is not defined in this notebook), so they are commented out:
# position_bar = position_freq_df.plot.bar(color = "Teal",
#                                          alpha = .5,
#                                          title = "Canuck player positions")
```
```
# We'll use requests and BeautifulSoup again in this tutorial:
import requests
from bs4 import BeautifulSoup
## We'll also use the re module for regular expressions.
import re
## Let's look at this list of state universities in the US:
top_url = 'https://en.wikipedia.org/wiki/List_of_state_universities_in_the_United_States'
# Use requests.get to fetch the HTML at the specific url:
response = requests.get(top_url)
print(type(response))
# This returns an object of type Response:
# And it contains all the HTML of the URL:
print(response.content)
# Create the nested data object using the BeautifulSoup() function:
soup = BeautifulSoup(response.content)
print(type(soup))
# The prettify method makes our output more readable.
## The example below looks at characters 50,000 - 51,000 of the scraped HTML:
print(soup.prettify()[50000:51000])
# We can use the find method to find the first tag (and its contents) of a certain type.
soup.find("p")
```
### Exploring and Inspecting a Webpage
Similar to the `find` method, we can use the `find_all` method to find all the tags of a certain type. But what tags are we looking for? We can look at the code for any individual part of an HTML page by clicking on it from within a browser and selecting `inspect`.

### Inspected Elements
This will show you the underlying code that generates this element.

You can see that the colleges appear as list items, meaning within `<li>` tags, and that their names are links, meaning within `<a>` tags.
```
# This gets us somewhere, but there are links in here that are not colleges and some of the colleges do not have links.
soup.find_all("a")
# Searching for <li> tags gets us closer, but there are still some non-universities in here.
list_items = soup.find_all("li")
print(type(list_items))
print(list_items[200:210])
# Let's search for the first and last university in the list and return their index number:
for i in range(0, len(list_items)):
content = str(list_items[i].contents)
if "University of Alabama System" in content:
print("Index of first university is: " + str(i))
if "University of Wyoming" in content:
print("Index of last university is: " + str(i))
# Now we can use those indexes to subset out everything that isn't a university:
universities = list_items[71:840]
print(len(universities))
print(universities)
# We can grab the University Names and URLs for the wikipedia pages for the schools that have them:
name_list = []
url_list = []
for uni in universities:
name_list.append(uni.text)
a_tag = uni.find("a")
if a_tag:
ref = a_tag.get("href")
print(ref)
url_list.append(ref)
else:
print("No URL for this University")
url_list.append("")
import pandas as pd
d = { "name" : pd.Series(name_list),
"html_tag" : pd.Series(universities),
"url" : pd.Series(url_list)}
df = pd.DataFrame(d)
df["url"] = "https://en.wikipedia.org" + df["url"]
df.shape
df[:10]
# How many names contain 'College':
df['name'].str.contains("College", na=False).value_counts()
# How many names contain 'University':
df['name'].str.contains("University", na=False).value_counts()
```
## From Scraping to Crawling
So, you might have noticed that the information we collected from this scraper isn't that interesting. However, it does include a list of URLs for each University we found and we can scrape these pages as well. On the individual pages for each university, there's data on the school type, their location, endowment, and founding year, as well as other interesting information that we may be able to get to.
At this point, you'd start to consider our task a basic form of web crawling - the systematic or automated browsing of multiple web pages. This is certainly a simple application of web crawling, but the idea of following hyperlinks from one URL to another is representative.
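A minimal sketch of that crawl pattern is shown below (my own illustration; the `time.sleep` delay is an addition for politeness and is not in the original notebook, which fetches the pages in the next cell):
```
import time

def crawl(urls, delay=1.0):
    # Fetch each URL in turn, pausing between requests so we don't hammer the server.
    pages = []
    for url in urls:
        if url:
            pages.append(requests.get(url).content)
        else:
            pages.append("")
        time.sleep(delay)
    return pages

# Example: fetch just the first three university pages.
# sample_pages = crawl(df["url"][:3])
```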
```
uni_pages = []
for url in df["url"]:
if url != "":
resp = requests.get(url)
uni_pages.append(resp.content)
else:
uni_pages.append("")
## Add this newly scraped data to our pandas dataframe:
df["wikipedia_page"] = uni_pages
df.shape
## Our pandas dataframe now has a column containing the entire HTML wikipedia page for each university:
df["wikipedia_page"][:10]
# Let's see what we can get from one page:
soup = BeautifulSoup(df["wikipedia_page"][0])
table = soup.find("table", {"class" : "infobox"})
rows = table.find_all("tr")
print(rows[:])
## Now we can search across these rows for various data of interest:
for row in rows:
header = row.find("th")
data = row.find("td")
# Make sure there was actually both a th and td tag in that row, and proceed if so.
if header is not None and data is not None:
if header.contents[0] == "Type":
print("The type of this school is " + data.text)
if header.contents[0] == "Location":
print("This location of this school is " + data.text)
if header.contents[0] == "Website":
print("The website for this school is " + data.text)
if "Endowment" in str(header.contents[0]):
print("The endowment for this school is " + data.text)
## Create empty columns of our dataframe to fill with new information:
df["type"] = ""
df["location"] = ""
df["website"] = ""
df["established"] = ""
df["endowment"] = ""
## Loop over every wikipedia page in our dataframe and populate our new columns with the pertinent data:
for i in range(0, len(df["wikipedia_page"])):
tmp_soup = BeautifulSoup(df["wikipedia_page"][i])
tmp_table = tmp_soup.find("table", {"class" : "infobox"})
if tmp_table is not None:
tmp_rows = tmp_table.find_all("tr")
for row in tmp_rows:
header = row.find("th")
data = row.find("td")
if header is not None and data is not None:
if header.contents[0] == "Type":
df["type"][i] = data.text
if header.contents[0] == "Location":
df["location"][i] = data.text
if header.contents[0] == "Website":
df["website"][i] = data.text
                ## Note that below we convert to unicode using utf-8, rather than simply str().
                ## This is more robust in handling special characters.
if "Endowment" in header.contents[0].encode('utf-8'):
df["endowment"][i] = data.text
if "Established" in header.contents[0].encode('utf-8'):
df["established"][i] = data.text
## Now we have dramatically more actionable data that could have been very difficult to collect manually.
df[:200]
```
# Statistics
```
import numpy as np
# 1D array
A1 = np.arange(20)
print(A1)
A1.ndim
# 2D array
A2 = np.array([[11, 12, 13], [21, 22, 23]])
print(A2)
np.sum(A2, axis=0)
np.sum(A2)
A2.ndim
```
## Sum
- Sum of array elements over a given axis.
- **Syntax:** `np.sum(array); array-wise sum`
- **Syntax:** `np.sum(array, axis=0); row-wise sum`
- **Syntax:** `np.sum(array, axis=1); column-wise sum`

Axis 0 is thus the first dimension (the "rows"), and axis 1 is the second dimension (the "columns")
```
# sum of 1D array
np.sum(A1)
# array-wise sum of 2D array
np.sum(A2)
A2
# sum of 2D array(axis=0, row-wise sum)
np.sum(A2, axis=0)
# sum of 2D array(axis=1, column-wise sum)
np.sum(A2, axis=1)
```
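For the arrays defined above, the expected results (worked out by hand) are:
```
# A1 = 0 + 1 + ... + 19
np.sum(A1)            # 190
# A2 = [[11, 12, 13], [21, 22, 23]]
np.sum(A2)            # 102
np.sum(A2, axis=0)    # array([32, 34, 36]), collapsing the rows
np.sum(A2, axis=1)    # array([36, 66]), collapsing the columns
```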
## Mean
- Compute the arithmetic mean along the specified axis.
- Returns the average of the array elements. The average is taken over the flattened array by default, otherwise over the specified axis. `float64` intermediate and return values are used for integer inputs.
- **Syntax:** `np.mean(array); array-wise mean`
- **Syntax:** `np.mean(array, axis=0); row-wise mean`
- **Syntax:** `np.mean(array, axis=1); column-wise mean`
```
A1
A2
# compute the average of array `A1`
np.mean(A1)
# mean of 2D array(axis=0, row-wise)
np.mean(A2, axis=0)
# mean of 2D array(axis=1, column-wise)
np.mean(A2, axis=1)
```
## Median
- Compute the median along the specified axis.
- Returns the median of the array elements.
- **Syntax:** `np.median(array); array-wise median`
- **Syntax:** `np.median(array, axis=0); row-wise median`
- **Syntax:** `np.median(array, axis=1); column-wise median`
```
# compute the meadian of `A1`
np.median(A1)
# median of 2D array(axis=0, row-wise)
np.median(A2, axis=0)
# median of 2D array(axis=1, column-wise)
np.median(A2, axis=1)
```
## Minimum
- Return the minimum of an array or minimum along an axis.
- **Syntax:** `np.min(array); array-wise min`
- **Syntax:** `np.min(array, axis=0); row-wise min`
- **Syntax:** `np.min(array, axis=1); column-wise min`
```
# minimum value of `A1`
np.min(A1)
# minimum value of A2(axis=0, row-wise)
np.min(A2, axis=0)
# minimum value of A2(axis=1, column-wise)
np.min(A2, axis=1)
```
## Maximum
- Return the maximum of an array or maximum along an axis.
- **Syntax:** `np.max(array); array-wise max`
- **Syntax:** `np.max(array, axis=0); row-wise max`
- **Syntax:** `np.max(array, axis=1); column-wise max`
```
# maximum value of `A1`
np.max(A1)
# maximum value of A2(axis=0, row-wise)
np.max(A2, axis=0)
# maximum value of A2(axis=1, column-wise)
np.max(A2, axis=1)
```
## Range
- **Syntax:** `np.max(array) - np.min(array)`
```
A1.max()
A1.min()
r = np.max(A1) - np.min(A1)
print(r)
```
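NumPy also provides `np.ptp` ("peak to peak"), which computes the same max-minus-min range directly; a short illustration:
```
# range via np.ptp -- equivalent to np.max(A1) - np.min(A1)
np.ptp(A1)
# row-wise and column-wise ranges of the 2D array
np.ptp(A2, axis=0)
np.ptp(A2, axis=1)
```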
## Standard Deviation
- Compute the standard deviation along the specified axis.
- Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the
flattened array by default, otherwise over the specified axis.
- **Syntax:** `np.std(array); array-wise std`
- **Syntax:** `np.std(array, axis=0); row-wise std`
- **Syntax:** `np.std(array, axis=1); column-wise std`
```
# compute the standard deviation of `A1`
np.std(A1)
# standard deviation of 2D array(axis=0, row-wise)
np.std(A2, axis=0)
# standard deviation of 2D array(axis=1, column-wise)
np.std(A2, axis=1)
```
## Variance
- Compute the variance along the specified axis.
- Returns the variance of the array elements, a measure of the spread of a
distribution. The variance is computed for the flattened array by
default, otherwise over the specified axis.
- **Syntax:** `np.var(array); array-wise var`
- **Syntax:** `np.var(array, axis=0); row-wise var`
- **Syntax:** `np.var(array, axis=1); column-wise var`
```
# compute the variance of `A1`
np.var(A1)
# variance of 2D array(axis=0, row-wise)
np.var(A2, axis=0)
# variance of 2D array(axis=1, column-wise)
np.var(A2, axis=1)
```
## Quantile
- Compute the q-th quantile of the data along the specified axis.
- **Syntax:** `np.quantile(array, q); array-wise quantile`
- **Syntax:** `np.quantile(array, q, axis=0); row-wise quantile`
- **Syntax:** `np.quantile(array, q, axis=1); column-wise quantile`
```
# 25th percentile of `A1`
np.quantile(A1, 0.25)
# 50th percentile of `A2`(axis=0)
np.quantile(A2, 0.5, axis=0)
# 75th percentile of `A2`(axis=1)
np.quantile(A2, 0.75, axis=1)
```
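Several quantiles can also be computed in a single call by passing a sequence of `q` values; a short illustration:
```
# 25th, 50th, and 75th percentiles of `A1` in one call
np.quantile(A1, [0.25, 0.5, 0.75])
```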
## Correlation Coefficient
```
# documentation
np.info(np.corrcoef)
# compute Correlation Coefficient
np.corrcoef(A2)
```
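`np.corrcoef` returns a matrix of pairwise correlation coefficients with ones on the diagonal; a short illustration with two 1D arrays:
```
# correlation between two perfectly correlated 1D arrays -> 2x2 matrix of ones
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 4, 6, 8, 10])
np.corrcoef(x, y)
```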
# Lesson 2.2:
# PowerGrid Models API - Using JSON Queries
This tutorial introduces the PowerGrid Models API and how it can be used to query model data.
__Learning Objectives:__
At the end of the tutorial, the user should be able to use the PowerGrid Models API to
* Use the correct topic and message structure to pass JSON queries to the PowerGrid Models API
* Query for feeder model information, such as model names and mRIDs
* Query for object information, such as CIM classes, object mRIDs, and attributes
## Getting Started
Before running any of the sample routines in this tutorial, it is first necessary to start the GridAPPS-D Platform and establish a connection to this notebook so that we can start passing calls to the API.
_Open the Ubuntu terminal and start the GridAPPS-D Platform if it is not running already:_
`cd gridappsd-docker`
~/gridappsd-docker$ `./run.sh -t develop`
_Once containers are running,_
gridappsd@[container]:/gridappsd$ `./run-gridappsd.sh`
```
# Establish connection to GridAPPS-D Platform:
from gridappsd import GridAPPSD
gapps = GridAPPSD("('localhost', 61613)", username='system', password='manager')
model_mrid = "_49AD8E07-3BF9-A4E2-CB8F-C3722F837B62" # IEEE 13 Node used for all example queries
```
---
# Table of Contents
* [1. Introduction to the PowerGrid Model API](#1.-Introduction-to-the-PowerGrid-Model-API)
* [2. Using the PowerGrid Model API](#2.-Using-the-PowerGrid-Model-API)
* [2.1. Specifying the Topic](#2.1.-Specifying-the-Topic)
* [2.2. Structure of a JSON Query Message](#2.2.-Structure-of-a-JSON-Query-Message)
* [2.3. Specifying the requestType](#2.3.-Specifying-the-requestType)
* [3. Querying for Feeder Model Info](#3.-Querying-for-Feeder-Model-Info)
* [3.1. Query for mRIDs of all Models](#3.1.-Query-for-mRIDs-of-all-Models)
* [3.2. Query for Details Dictionary of all Models](#3.2.-Query-for-Details-Dictionary-of-all-Models)
* [4. Querying for Object Info](#4.-Querying-for-Object-Info)
* [4.1. Query for CIM Classes of Objects in Model](#4.1.-Query-for-CIM-Classes-of-Objects-in-Model)
---
# 1. Introduction to the PowerGrid Model API
The PowerGrid Model API is used to pull model information, including the names, mRIDs, measurements, and nominal values of power system equipment in the feeder (such as lines, loads, switches, transformers, and DERs).
INSERT MORE ON API
INSERT MORE ON API
INSERT MORE ON API
---
# 2. Using the PowerGrid Model API
## 2.1. Specifying the Topic
All queries passed to the PowerGrid Models API need to use the correct topic. For a review of GridAPPS-D topics, see Lesson 1.4. There are two ways to specify the topic; both produce identical results.
__1) Specifying the topic as a string:__
```
topic = "goss.gridappsd.process.request.data.powergridmodel"
```
__2) Using the `topics` library to specify the topic:__
```
from gridappsd import topics as t
topic = t.REQUEST_POWERGRID_DATA
```
[Return to Top](#Table-of-Contents)
## 2.2. Structure of a JSON Query Message
Most simple queries are passed to the PowerGrid Models API as JSON scripts wrapped in a Python string. The general format is
```
message = """
{
"requestType": "[INSERT QUERY HERE]",
"resultFormat": "JSON",
"modelId": "[OPTIONAL, INSERT MODEL mRID HERE]",
"objectId": "[OPTIONAL, INSERT OBJECT mRID HERE]",
"filter": "[OPTIONAL, INSERT SPARQL FILTER HERE]"
}
"""
```
The components of the message are as follows:
* `"requestType":` -- Specifies the type of query. Available requestType are listed in the next section.
* `"resultFormat":` -- Optional. Specifies the format of the response, can be `"JSON"`, `"CSV"`, or `"XML"`. JSON is used by default if no format is specified.
* `"modelID":` -- Optional. Used to filter the query to only one particular model whose mRID is specified. Be aware of spelling and capitalization differences between JSON query spelling `"modelId"` and Python Library spelling `model_id`.
* `"objectType":` -- Optional. Used to filter the query to only one CIM class of equipment. Speciying the _objectID_ will override any values specified for _objectType_.
* `"objectID":` -- Optional. Used to filter the query to only one object whose mRID is specified. Specifying the _objectID_ will override any values specified for _objectType_.
* `"filter":` -- Optional. Used to filter the query using a SPARQL filter. SPARQL queries are covered in the next lesson.
The usage of each of these message components is explained in detail with code block examples below.
__Important__: Be sure to pay attention to placement of commas ( __,__ ) at the end of each JSON line. Commas are placed at the end of each line _except_ the last line. Incorrect comma placement will result in a JsonSyntaxException.
All of the queries are passed to the PowerGrid Models API using the `.get_response(topic, message)` method of the GridAPPS-D platform connection variable, as sketched below.
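Putting these pieces together, the general calling pattern is sketched below. The message shown uses the `QUERY_MODEL_NAMES` request from Section 3.1; the `timeout` argument and reading the `'data'` key follow the usage shown later in this lesson.
```
# General pattern (sketch): build the JSON message as a string, then pass it
# together with the PowerGrid Models topic to the platform connection.
topic = "goss.gridappsd.process.request.data.powergridmodel"
message = """
{
    "requestType": "QUERY_MODEL_NAMES",
    "resultFormat": "JSON"
}
"""
response = gapps.get_response(topic, message, timeout=10)
# the query results are typically carried under the 'data' key of the response
results = response['data']
```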
[Return to Top](#Table-of-Contents)
## 2.3. Specifying the `requestType`
Below are the possible `requestType` strings that are used to specify the type of each query. Executable code block examples are provided for each of the requests in the subsections below.
The first group of _requestType_ are for queries for information related to the entire model or a set of models, such as the model name, mRID, region, and substation:
* `"requestType": "QUERY_MODEL_NAMES"` -- [Query for the list of all model name mRIDs](#3.1.-Query-for-mRIDs-of-all-Models)
* `"requestType": "QUERY_MODEL_INFO"` -- [Query for the dictionary of all details for all feeders in Blazegraph](#3.2.-Query-for-Details-Dictionary-of-all-Models)
The second group of _requestType_ are for queries for a single object or a single class of objects within a model, such as the object mRID, CIM attributes, or measurement points:
* `"requestType": "QUERY_OBJECT_TYPES"` -- [Query for the types of CIM classes of objects in the model](#4.1.-Query-for-CIM-Classes-of-Objects-in-Model)
* `"requestType": "QUERY_OBJECT_IDS"` -- Query for a list of all mRIDs for objects of a CIM class in the model
* `"requestType": "QUERY_OBJECT"` -- Query for CIM attributes of an object using its unique mRID
* `"requestType": "QUERY_OBJECT_DICT"` -- Query for the dictionary of all details for an object using either its _objectType_ OR its _objectID_
* `"requestType": "QUERY_OBJECT_MEASUREMENTS"` -- Query for all measurement types and mRIDs for an object using either its _objectType_ OR its _ObjectID_.
The third group of _requestType_ are for queries based on SPARQL filters or complete SPARQL queries. The structure of SPARQL was introduced in [Lesson 1.XX](). Usage of these two _requestType_ will be covered separately in the next two lessons.
* `"requestType": "QUERY_MODEL"` -- Query for all part of a specified model, filtered by object type using a SPARQL filter.
* `"requestType": "QUERY"` -- Query using a complete SPARQL query.
[Return to Top](#Table-of-Contents)
---
# 3. Querying for Feeder Model Info
This section outlines the pre-built JSON queries that can be passed to the PowerGrid Model API to obtain mRIDs and other information for all models and feeders stored in the Blazegraph Database.
## 3.1. Query for mRIDs of all Models
This query obtains a list of all the model MRIDs stored in the Blazegraph database.
Query requestType:
* `"requestType": "QUERY_MODEL_NAMES"`
Allowed parameters:
* `"resultFormat":` – "XML" / "JSON" / "CSV" -- Optional. Will return results as a list in the format selected. JSON used by default.
```
message = """
{
"requestType": "QUERY_MODEL_NAMES",
"resultFormat": "JSON"
}
"""
gapps.get_response(topic, message)
```
[Return to Top](#Table-of-Contents)
## 3.2. Query for Details Dictionary of all Models
This query returns a list of names and MRIDs for all models, substations, subregions, and regions for all available feeders stored in the Blazegraph database.
Query requestType:
* `"requestType": "QUERY_MODEL_INFO"`
Allowed parameters:
* `"resultFormat":` – "XML" / "JSON" / "CSV" -- Will return results as a list in the format selected.
```
message = """
{
"requestType": "QUERY_MODEL_INFO",
"resultFormat": "JSON"
}
"""
gapps.get_response(topic, message)
```
[Return to Top](#Table-of-Contents)
---
# 4. Querying for Object Info
This section outlines the pre-built JSON queries that can be passed to the PowerGrid Model API to obtain mRIDs and other information for a particular object or a class of objects for one or more feeders stored in the Blazegraph Database.
All of the examples in this section use the IEEE 13 node model. The Python string-formatting placeholder `%s` is used in all queries so that the code blocks can be cut and pasted into any Python script without needing to change the model mRID.
```
model_mrid = "_49AD8E07-3BF9-A4E2-CB8F-C3722F837B62" # IEEE 13 Node used for all example queries
```
## 4.1. Query for CIM Classes of Objects in Model
This query is used to query for a list of all the CIM XML classes of objects present in the Blazegraph for a particular model or all models in the database.
Query requestType is
* `"requestType": "QUERY_OBJECT_TYPES"`
Allowed parameters are
* `"modelId":` "model name mRID" -- Optional. Searches only the particular model identified by the given unique mRID
* `"resultFormat":` – "XML" / "JSON" / "CSV" -- Will return results as a list in the format selected.
__1) Query entire Blazegraph database__
Omit the "modelId" parameter to search the entire blazegraph database.
```
message = """
{
"requestType": "QUERY_OBJECT_TYPES",
"resultFormat": "JSON"
}
"""
gapps.get_response(topic, message)
```
__2) Query for only a particular model__
Specify the model MRID as a python string and pass it as a parameter to the method to return only the CIM classes of objects in that particular model.
Be aware of spelling and capitalization differences between JSON query spelling `"modelId"` and Python Library spelling `model_id`.
```
message = """
{
"requestType": "QUERY_OBJECT_TYPES",
"modelId": "%s",
"resultFormat": "JSON"
}
""" % model_mrid
gapps.get_response(topic, message)
```
[Return to Top](#Table-of-Contents)
## 4.2. Query for mRIDs of Objects in a Feeder
This query is used to obtain all the mRIDs of objects of a particular CIM class in the feeder.
Query requestType is
* `"requestType": "QUERY_OBJECT_IDS"`
Allowed parameters are:
* `"modelId":` "model name mRID" -- When specified it searches against that model, if empty it will search against all models
* `"objectType":` "CIM Class" -- Optional. Specifies the type of objects you wish to return details for.
* `"resultFormat":` – "XML" / "JSON" / "CSV" -- Will return results as a list in the format selected.
Within a particular feeder, it is possible to query for objects of all the CIM classes obtained using `"requestType": "QUERY_OBJECT_TYPES"` (discussed above in [Section 4.1](#4.1.-Query-for-CIM-Classes-of-Objects-in-Model)). Note that the RDF URI is not included in the query, only the name of the class, such as `"objectType": "ACLineSegment"` or `"objectType": "LoadBreakSwitch"`.
```
message = """
{
"requestType": "QUERY_OBJECT_IDS",
"resultFormat": "JSON",
"modelId": "%s",
"objectType": "LoadBreakSwitch"
}
""" % model_mrid
gapps.get_response(topic, message)
```
[Return to Top](#Table-of-Contents)
## 4.3. Query for CIM Attributes of an Object
This query is used to obtain all the attributes and mRIDs of those attributes for a particular object whose mRID is specified.
Query requestType is
* `"requestType": "QUERY_OBJECT"`
Allowed parameters are:
* `"modelId":` "model name mRID" -- When specified it searches against that model, if empty it will search against all models
* `"objectId":` "object mRID" -- Optional. Specifies the type of objects you wish to return details for.
* `"resultFormat":` – "XML" / "JSON" / "CSV" -- Will return results as a list in the format selected.
The mRID of the desired object can be obtained using `"requestType": "QUERY_OBJECT_IDS"` (discussed above in [Section 4.2](#4.2.-Query-for-mRIDs-of-Objects-in-a-Feeder)).
```
object_mrid = "_2858B6C2-0886-4269-884C-06FA8B887319"
message = """
{
"requestType": "QUERY_OBJECT",
"resultFormat": "JSON",
"modelId": "%s",
"objectId": "%s"
}
""" % (model_mrid, object_mrid)
message = """
{
"requestType": "QUERY_OBJECT",
"resultFormat": "JSON",
"objectId": "_4F76A5F9-271D-9EB8-5E31-AA362D86F2C3"
}
"""
gapps.get_response(topic, message)
```
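For completeness, the `QUERY_OBJECT_DICT` request listed in [Section 2.3](#2.3.-Specifying-the-requestType) follows the same message pattern. A sketch is shown below; the `objectType` filter shown is just an example, and the exact fields returned depend on the platform version.
```
message = """
{
    "requestType": "QUERY_OBJECT_DICT",
    "resultFormat": "JSON",
    "modelId": "%s",
    "objectType": "LoadBreakSwitch"
}
""" % model_mrid
gapps.get_response(topic, message)
```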
[Return to Top](#Table-of-Contents)
# 5. GridAPPSD-Python Shortcut Methods
A small number of simple PowerGrid Model API queries have pre-built Python functions that can be used without specifying the topic and a particular message.
## 5.1. `query_object_types`
This method is associated with the GridAPPSD connection object and returns a list of all the CIM XML classes of objects present in the Blazegraph for a particular model or all models in the database.
Allowed parameters are
* model_id (optional) - when specified, it searches only the particular model identified by the given unique mRID
__1) Query entire Blazegraph database__
Leave the arguments blank to search all models in the Blazegraph database
```
gapps.query_object_types()
```
__2) Query for only a particular model__
Specify the model MRID as a python string and pass it as a parameter to the method to return only the CIM classes of objects in that particular model
```
model_mrid = "_49AD8E07-3BF9-A4E2-CB8F-C3722F837B62" # IEEE 13 Node used for all example queries
gapps.query_object_types(model_id = model_mrid)
```
## 5.2. Query for Measurement mRIDs
```
message = '''{
"requestType": "QUERY_OBJECT",
"resultFormat": "JSON",
"objectId": "_4F76A5F9-271D-9EB8-5E31-AA362D86F2C3"
}'''
gapps.get_response(topic, message)
message = {
"modelId": model_mrid,
"requestType": "QUERY_OBJECT_MEASUREMENTS",
"resultFormat": "JSON",
"objectType": "ACLineSegment"}
obj_msr_ACline = gapps.get_response(topic, message, timeout=10)
obj_msr_ACline = obj_msr_ACline['data']
# Choose a specific measurement mRID. Screen out those whose type is not PNV. For example,
obj_msr_ACline = [k for k in obj_msr_ACline if k['type'] == 'PNV']
obj_msr_ACline
```
## 5.3. Querying for All _modelNames_ mRIDs
When passing commands through the API, it is often necessary to specify the MRID of the particular power system network model.
The PowerGrid Models API contains a function to obtain a list of all the model mRIDs stored in the Blazegraph database.
This method returns identical results to the JSON message explained above in [Section 3.1](#3.1.-Query-for-mRIDs-of-all-Models).
```
gapps.query_model_names()
```
```
%matplotlib inline
import numpy as np
import h5py
import os
from functools import reduce
from imp import reload
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from hangul.read_data import load_data, load_images, load_all_labels
from matplotlib import cm
from hangul import style
```
## Variation across fonts for 1 character
```
fonts = ['GothicA1-Regular', 'NanumMyeongjo', 'NanumBrush', 'Stylish-Regular']
# appendix figure 3
fig, ax = plt.subplots(1,4, sharey=True, figsize=(6,1))
for ii,font in enumerate(fonts):
image = load_images('/data/hangul/h5s/{}/{}_500.h5'.format(font, font), median_shape=True)
ax[ii].imshow(image[0], cmap='gray')
ax[ii].set_xlabel(font, fontsize=10)
ax[ii].set_xticks([])
ax[ii].set_yticks([])
plt.tight_layout()
plt.savefig('/home/ahyeon96/hangul_misc/4fonts.pdf', dpi=300)
plt.show()
```
## Mean, Median, Std
```
fontsfolder = '/data/hangul/h5s'
fontnames = os.listdir(fontsfolder)
len(fontnames)
fontnames[0] = fonts[0]
fontnames[1] = fonts[1]
fontnames[2] = fonts[2]
fontnames[3] = fonts[3]
# all blocks all fonts
newdata = []
alldata_unconcat = []
for fontname in fontnames:
fname = os.path.join(fontsfolder, '{}/{}_24.h5'.format(fontname,fontname))
image = load_images(fname, median_shape=True)
newdata.append(image)
alldata_unconcat.append(image)
newdata = np.concatenate(newdata, axis=0)
#all blocks w/in 1 font
fontname = 'GothicA1-Regular'
fname = os.path.join(fontsfolder, '{}/{}_24.h5'.format(fontname,fontname))
image = load_images(fname, median_shape=True)
data_1font = image
# single block all fonts
data_1block = []
for fontname in fontnames:
fname = os.path.join(fontsfolder, '{}/{}_24.h5'.format(fontname,fontname))
image = load_images(fname, median_shape=True)
data_1block.append(image[0])
data_1block = np.asarray(data_1block)
# appendix figure 5
# mean, median, std
fig, axes = plt.subplots(nrows=3, ncols=3, sharex=True, sharey=True, figsize = (3,3))
axes = axes.flatten()
axes[0].imshow(newdata.mean(axis=0), cmap = 'gray_r')
axes[0].set_xticks([], [])
axes[0].set_yticks([], [])
axes[0].set_title('Mean', fontsize=10)
axes[0].set_ylabel('All Fonts All Blocks', rotation=0, fontsize=10, labelpad=70)
axes[1].imshow(np.median(newdata, axis=0), cmap = 'gray_r')
axes[1].set_title('Median', fontsize=10)
axes[2].imshow(newdata.std(axis=0), cmap = 'gray_r')
axes[2].set_title('Standard\n Deviation', fontsize=10)
axes[3].imshow(data_1font.mean(axis=0), cmap = 'gray_r')
axes[3].set_ylabel('One Font All Blocks', fontsize=10, labelpad=70, rotation=0)
axes[4].imshow(np.median(data_1font, axis=0), cmap = 'gray_r')
axes[5].imshow(data_1font.std(axis=0), cmap = 'gray_r')
axes[6].imshow(data_1block.mean(axis=0), cmap = 'gray_r')
axes[6].set_ylabel('All Fonts Single Block', fontsize=10, labelpad=70, rotation=0)
axes[7].imshow(np.median(data_1block, axis=0), cmap = 'gray_r')
axes[8].imshow(np.std(data_1block, axis=0), cmap = 'gray_r')
fig.savefig('/home/ahyeon96/hangul_misc/mms.pdf', dpi=300, bbox_inches='tight')
```
## Pixels within font for all 35 fonts
```
# appendix figure 6
plt.figure(figsize=(10,4))
plt.subplot(1, 2, 1)
for ii,font in enumerate(fontnames):
if ii<4:
plt.hist(newdata[ii].ravel(), bins=25, label=font, histtype='step', linewidth=2)
else:
plt.hist(newdata[ii].ravel(), bins=25, alpha=0.2, histtype='step')
plt.yscale('log')
plt.title('Pixels within font for all 35 fonts', fontsize=10)
plt.ylabel('Frequency', fontsize=10)
plt.xlabel('Pixel Values', fontsize=10)
plt.text(-40, 800, 'A', fontweight='bold', fontsize=20)
plt.legend()
plt.subplot(1, 2, 2)
for ii,font in enumerate(fontnames):
if ii<4:
plt.hist(np.linalg.norm(alldata_unconcat[ii], axis=(1,2)), bins=25, label=font)
else:
plt.hist(np.linalg.norm(alldata_unconcat[ii], axis=(1,2)), bins=25, alpha=0.1)
axes = plt.gca()
plt.title('Character norms for all 35 fonts', fontsize=10)
plt.ylabel('Frequency', fontsize=10)
plt.xlabel('Character Norms', fontsize=10)
plt.text(-30,1800, 'B', fontweight='bold', fontsize=20)
plt.legend()
plt.tight_layout()
plt.savefig('/home/ahyeon96/hangul_misc/hist.pdf', dpi=300)
```
## Correlations across fonts
```
n_fonts = len(fontnames)
correlation = np.full((n_fonts, n_fonts), np.nan)
for ii,font in enumerate(fontnames):
for jj,font in enumerate(fontnames):
corr = np.corrcoef(alldata_unconcat[ii].flatten(), alldata_unconcat[jj].flatten())[0][1]
correlation[ii,jj] = corr
np.savez('/home/ahyeon96/hangul_misc/correlation.npz', correlation)
correlation = np.load('/home/ahyeon96/hangul_misc/correlation.npz')
correlation = correlation['arr_0']
plt.figure(figsize=(6,6))
plt.imshow(correlation, cmap='Greys', label=fontnames)
plt.xticks(np.arange(len(fontnames)),fontnames, rotation=90, fontsize=8)
plt.yticks(np.arange(len(fontnames)),fontnames, fontsize=8)
plt.tight_layout()
# plt.savefig('/home/ahyeon96/data/hangul/results/confmat.pdf')
# appendix figure 7
import pylab
fig = plt.figure(figsize=(8,8))
ax1 = fig.add_axes([0.09,0.1,0.2,0.6])
Y = linkage(correlation, method='ward')
Z1 = dendrogram(Y, orientation='left')
ax1.set_xticks([])
ax1.set_yticks([])
axmatrix = fig.add_axes([0.3,0.1,0.6,0.6])
idx1 = Z1['leaves']
D = correlation[idx1,:]
D2 = D[:,idx1]
fontnames_order = [fontnames[idx] for idx in idx1]
im = axmatrix.matshow(D2, aspect='auto', origin='lower', cmap='Greys')
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([1.15,0.1,0.01,0.6])
pylab.colorbar(im, cax=axcolor)
labels1 = ['{}'.format(font) for font in fontnames_order]
axmatrix.set_yticks(range(35))
axmatrix.set_yticklabels(labels1, minor=False, fontsize=8)
axmatrix.yaxis.set_label_position('right')
axmatrix.yaxis.tick_right()
fig.tight_layout()
fig.savefig('/home/ahyeon96/hangul_misc/dendrogram.pdf', bbox_inches='tight', dpi=300)
f, axes = plt.subplots(1, len(fontnames_order), figsize=(len(fontnames_order), 1))
for ii,font in enumerate(fontnames_order):
ax = axes[ii]
fname = os.path.join(fontsfolder, '{}/{}_500.h5'.format(font,font))
image = load_images(fname, median_shape=True)
ax.imshow(image[0], cmap='gray_r')
ax.set_xticks([])
ax.set_yticks([])
ax.set_title(font, fontsize=5)
f.tight_layout()
plt.savefig('/home/ahyeon96/hangul_misc/first_image.pdf', dpi=300)
```
# Training Pong Game by Using DQN
We use PyTorch to train a Deep Q Learning (DQN) agent on a Pong Game.
Reference Code:
- Pong_in_Pygame (Author: clear-code-projects)
+ Youtube: https://www.youtube.com/playlist?list=PL8ui5HK3oSiEk9HaKoVPxSZA03rmr9Z0k
+ Github: https://github.com/clear-code-projects/Pong_in_Pygame
- Deep-Q-Learning-Paper-To-Code (Author: philtabor)
+ Youtube: https://www.youtube.com/watch?v=wc-FxNENg9U
+ Github: https://github.com/philtabor/Deep-Q-Learning-Paper-To-Code
- Reinforcement Learning (DQN) Tutorial(in class colab notebook)
+ https://colab.research.google.com/drive/12M4bu1JUw0zKV2SelwLfxiQ4_aS_WVuH#scrollTo=MA-BKC_ZUMUV
## **Preparation**
---
Git-clone repository from my [Github](https://github.com/yenzu0329/DQN_for_Pong)
The repo contains
- **result** (folder that save the training result and demo video)
- dqn.py
- pong.py
- main.py (code for training DQN model)
- test.py (code for testing DQN model)
```
!git clone https://github.com/yenzu0329/DQN_for_Pong.git
```
### install pygame
```
!pip install pygame
```
To enable pygame on Colab, we have to fool the system into thinking it has video card access.
So we assign a dummy video driver.
The display is then achieved with matplotlib's `imshow()`.
```
import os
os.environ["SDL_VIDEODRIVER"] = "dummy"
%matplotlib inline
```
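Because frames cannot be shown in a real window on Colab, one way to display a pygame surface inline is to convert it to a NumPy array and plot it. The helper below is a minimal sketch (the surface it expects comes from `pong.py`; the repo's own display code may differ):
```
import numpy as np
import matplotlib.pyplot as plt
import pygame

def show_frame(surface):
    # Convert the pygame surface to a (width, height, 3) RGB array,
    # transpose it to (height, width, 3), and display it with imshow.
    frame = pygame.surfarray.array3d(surface)
    frame = np.transpose(frame, (1, 0, 2))
    plt.axis('off')
    plt.imshow(frame)
    plt.show()
```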
## **DQN model**
---
First import necessary libraries
```
import torch as T
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
```
### DeepQNetwork
We define a class for the DQN model.
The model is an MLP network that takes the 4-dimensional game state as input (the horizontal distance between the paddle and the ball, the paddle's y position, the ball's y position, and the ball's vertical direction, as built in the training loop below). It has **three**
outputs, representing $Q(s, \mathrm{up})$, $Q(s, \mathrm{down})$ and $Q(s, \mathrm{stop})$ (where $s$ is the input to the
network).
In effect, the network is trying to predict the *expected return* of
taking each action given the current input.
```
class DeepQNetwork(nn.Module):
def __init__(self, lr, input_dims, fc1_dims, fc2_dims, n_actions):
super(DeepQNetwork, self).__init__()
self.input_dims = input_dims
self.fc1_dims = fc1_dims
self.fc2_dims = fc2_dims
self.n_actions = n_actions
self.fc1 = nn.Linear(*self.input_dims, self.fc1_dims)
self.fc2 = nn.Linear(self.fc1_dims, self.fc2_dims)
self.fc3 = nn.Linear(self.fc2_dims, self.n_actions)
self.optimizer = optim.Adam(self.parameters(), lr=lr)
self.loss = nn.MSELoss()
self.device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')
self.to(self.device)
def forward(self, state):
x = F.relu(self.fc1(state))
x = F.relu(self.fc2(x))
actions = self.fc3(x)
return actions
```
### Agent
We also define a class for manage and save the model
The class contains 5 methods
- `store_transition` - store (state, action, reward, new_state, terminal) to a cyclic buffer of bounded size that holds the transitions observed recently
- `choose_action` - select an action accordingly to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action is determined by `epsilon`.
- `learn` - sample a batch of transitions, computes $Q(s_t, a_t)$ and
$V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our
loss. We also use a target network to compute $V(s_{t+1})$ for added stability.
- `make_memory` - make a dictionary for saving memories to a pickle file. The dictionary contains `'state'`, `'action'`, `'reward'`, `'new_state'`, `'terminal'` as its five keys
- `load_memory` - load memories from a dictionary
```
class Agent():
def __init__(self, gamma, epsilon, lr, input_dims, batch_size, n_actions,
max_mem_size = 10000, eps_end = 0.01, eps_dec = 5e-4):
self.gamma = gamma
self.epsilon = epsilon
self.eps_min = eps_end
self.eps_dec = eps_dec
self.lr = lr
self.action_space = [i for i in range(n_actions)]
self.mem_size = max_mem_size
self.batch_size = batch_size
self.mem_cntr = 0
self.Q_eval = DeepQNetwork(self.lr, n_actions = n_actions, input_dims=input_dims,
fc1_dims=50, fc2_dims=50)
self.Q_target = DeepQNetwork(self.lr, n_actions = n_actions, input_dims=input_dims,
fc1_dims=50, fc2_dims=50)
self.Q_eval.to(self.Q_eval.device)
self.Q_target.to(self.Q_eval.device)
self.Q_target.load_state_dict(self.Q_eval.state_dict())
self.Q_target.eval()
self.state_memory = np.zeros((self.mem_size, *input_dims), dtype=np.float32)
self.new_state_memory = np.zeros((self.mem_size, *input_dims), dtype=np.float32)
self.action_memory = np.zeros(self.mem_size, dtype=np.int32)
self.reward_memory = np.zeros(self.mem_size, dtype=np.float32)
self.terminal_memory = np.zeros(self.mem_size, dtype=bool)  # np.bool is deprecated/removed in newer NumPy
def store_transition(self, state, action, reward, new_state, done):
index = self.mem_cntr % self.mem_size
self.state_memory[index] = state
self.new_state_memory[index] = new_state
self.reward_memory[index] = reward
self.action_memory[index] = action
self.terminal_memory[index] = done
self.mem_cntr += 1
def choose_action(self, observation):
if np.random.random() > self.epsilon:
state = T.FloatTensor([observation]).to(self.Q_eval.device)
actions = self.Q_eval.forward(state).to('cpu')
action = T.argmax(actions).item()
else:
action = np.random.choice(self.action_space)
return action
def learn(self):
if self.mem_cntr < self.batch_size:
return
max_mem = min(self.mem_cntr, self.mem_size)
batch = np.random.choice(max_mem, self.batch_size, replace=False)
batch_idx = np.arange(self.batch_size, dtype = np.int32)
state_batch = T.tensor(self.state_memory[batch]).to(self.Q_eval.device)
new_state_batch = T.tensor(self.new_state_memory[batch]).to(self.Q_eval.device)
reward_batch = T.tensor(self.reward_memory[batch]).to(self.Q_eval.device)
terminal_batch = T.tensor(self.terminal_memory[batch]).to(self.Q_eval.device)
action_batch = self.action_memory[batch]
q_eval = self.Q_eval.forward(state_batch)[batch_idx, action_batch]
q_next = self.Q_target.forward(new_state_batch)
q_next[terminal_batch] = 0.0
q_target = reward_batch + self.gamma * T.max(q_next, dim=1)[0]
loss = self.Q_eval.loss(q_target, q_eval).to(self.Q_eval.device)
self.Q_eval.optimizer.zero_grad()
loss.backward()
self.Q_eval.optimizer.step()
self.epsilon = self.epsilon - self.eps_dec if self.epsilon > self.eps_min else self.eps_min
def make_memory(self):
memory = {}
memory['state'] = self.state_memory
memory['new_state'] = self.new_state_memory
memory['reward'] = self.reward_memory
memory['action'] = self.action_memory
memory['terminal'] = self.terminal_memory
return memory
def load_memory(self, memory):
self.state_memory = memory['state']
self.new_state_memory = memory['new_state']
self.reward_memory = memory['reward']
self.action_memory = memory['action']
self.terminal_memory = memory['terminal']
```
## **Training**
---
First import necessary libraries
```
import pickle
import torch
from DQN_for_Pong.pong import *
import matplotlib.pyplot as plt
import numpy as np
```
The part below is the main training loop. At the beginning of each episode, we reset the environment and initialize the state. Then we repeatedly sample an action, execute it, get the new observation and reward, and optimize the model once. When the player loses the ball, the episode ends and we start a new one.
The reward policy is shown below:
- Ball survives: +0.01
- Opponent loses the ball: +3
- Player loses the ball: - (y distance between ball and player) * 0.1
If an episode's score exceeds 200, we consider the model good enough and terminate the training loop.
After training finishes, we save the weights of `agent.Q_eval` and `agent.Q_target` to the `policy_net_model.pth` and `target_net_model.pth` files. We also save the transition memory and a chart of the training process. Those files can be found in the **result** directory.
```
UP = 0
DOWN = 1
STOP = 2
if __name__ == '__main__':
agent = Agent(gamma=0.99, epsilon=1.0, batch_size=64, n_actions=3, input_dims=[4], lr=0.001)
scores, eps_history = [], []
avg_scores = []
player = Player(WIDTH - 10, HEIGHT/2, light_grey)
opponent = Opponent(5, HEIGHT/2, light_grey)
paddles = [player, opponent]
n_games = 10000
for i in range(n_games):
# Game objects
tmp_color = (randint(80,220),randint(80,220),randint(80,220))
player.color = tmp_color
opponent.color = tmp_color
ball = Ball(WIDTH/2, HEIGHT/2, color = tmp_color, paddles = paddles)
game_manager = GameManager(ball=ball, player=player, opponent=opponent)
done = False
score = 0.0
observation = [abs(player.get_x()-ball.get_x()), player.get_y(), ball.get_y(), ball.get_vel_direction()]
while not done:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
action = agent.choose_action(observation=observation)
if action == UP: player.move_up()
elif action == DOWN: player.move_down()
else: player.stop()
# Background Stuff
screen.fill(bg_color)
pygame.draw.rect(screen,light_grey,middle_strip)
score_label = basic_font.render("Episode: "+str(i), True, light_grey)
screen.blit(score_label, (10, 10))
# Run the game
reward = game_manager.run_game()
if reward == 0:
score += 0.01
elif reward == 1:
score += 3.0
if reward == -1:
score -= abs(ball.get_y() - player.get_y()) * 0.1
done = True
#done = game_manager.is_done()
new_observation = [abs(player.get_x()-ball.get_x()), player.get_y(), ball.get_y(), ball.get_vel_direction()]
agent.store_transition(observation, action, reward, new_observation, done)
agent.learn()
observation = new_observation
# Rendering
pygame.display.flip()
clock.tick(500)
scores.append(score)
eps_history.append(agent.epsilon)
if(len(scores) > 50):
avg_score = 1.0 * np.mean(scores[-50:])
else:
avg_score = 1.0 * np.mean(scores)
avg_scores.append(avg_score)
if i % 100 == 0:
agent.Q_target.load_state_dict(agent.Q_eval.state_dict())
print('episode%4d' % i, '-- score %5.2f' % score, 'avg score %5.2f' % avg_score,
'epsilon %.2f' % agent.epsilon)
if score > 200:
n_games = i + 1
break
# print and save model
print(agent.Q_eval)
print(agent.Q_target)
local_dir = os.getcwd()
result_dir = os.path.join(local_dir, "result")
if not os.path.isdir(result_dir):
os.mkdir(result_dir)
policy_net_path = os.path.join(local_dir, "result/policy_net_model.pth")
target_net_path = os.path.join(local_dir, "result/target_net_model.pth")
memory_path = os.path.join(local_dir, "result/memory.pickle")
torch.save(agent.Q_eval.state_dict(), policy_net_path)
torch.save(agent.Q_target.state_dict(), target_net_path)
with open(memory_path, 'wb') as f:
pickle.dump(agent.make_memory(), f)
# draw plot
x = [i+1 for i in range(n_games)]
filename = os.path.join(local_dir, 'dqn_for_pong.png')
fig = plt.figure()
plt.title("DQN for Pong")
plt.plot(x, scores, '-', label = 'score')
plt.plot(x, avg_scores, '-', label = 'avg_score')
plt.legend()
fig.savefig(filename)
```
## **Testing**
---
You can download the **result** directory and the **test.py** file (in the DQN_for_Pong directory) to test the training result. Please make sure that **result** and **test.py** are in the same directory.
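If you want to reload the trained weights yourself, a minimal sketch is shown below (it assumes the same `DeepQNetwork` constructor arguments used during training and that the saved files sit in the `result` directory):
```
import torch

# rebuild the network with the dimensions used in training and load the saved weights
policy_net = DeepQNetwork(lr=0.001, input_dims=[4], fc1_dims=50, fc2_dims=50, n_actions=3)
policy_net.load_state_dict(torch.load("result/policy_net_model.pth", map_location="cpu"))
policy_net.eval()
```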
Here is a demo video
```
from IPython.display import HTML
from base64 import b64encode
import os
# Input video path
save_path = "/content/DQN_for_Pong/result/demo_video.mp4"
# Compressed video path
compressed_path = "/content/DQN_for_Pong/result/demo_video_compressed.mp4"
os.system(f"ffmpeg -i {save_path} -vcodec libx264 {compressed_path}")
# Show video
mp4 = open(compressed_path,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
```
```
import pandas as pd
import numpy as np
from collections import defaultdict
from sklearn.datasets import fetch_20newsgroups
from sklearn.metrics import confusion_matrix
from tqdm import tqdm
import itertools
import matplotlib.pyplot as plt
import re
%matplotlib inline
```
# Naive Bayes code (with Sentence)
##### Student ID: 2109853M-IM20-0015
###### This code includes the bag-of-word model, the bag-of-sentence model, and the Naive Bayes classifier, all implemented by myself. (Question 6)
```
def preprocess(str_arg):
cleaned_str=re.sub('[^a-z\s]+',' ',str_arg,flags=re.IGNORECASE) #every char except alphabets is replaced
cleaned_str=re.sub('(\s+)',' ',cleaned_str) #multiple spaces are replaced by single space
cleaned_str=cleaned_str.lower() #converting the cleaned string to lower case
cleaned_str=cleaned_str.strip()
return cleaned_str # returning the preprocessed string
```
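A quick, illustrative check of `preprocess` on a made-up string:
```
# punctuation and digits are replaced by spaces, whitespace is collapsed,
# and the result is lower-cased and stripped
print(preprocess("Hello, World!! 123 This is a TEST."))  # -> hello world this is a test
```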
## Naive Bayes part (with Sentence)
```
class Self_NaiveBayes:
def __init__(self, each_classes):
self.classes = each_classes
# the-bag-of-word dictionary
self.word_dicts = np.array([defaultdict(lambda: 0) for index in range(self.classes.shape[0])])
# the-bag-of-sentence dictionary
self.sen_dicts = np.array([defaultdict(lambda: 0) for index in range(self.classes.shape[0])])
def addTosen(self, example, index):
'''
The bag-of-sentence part
sen_dicts: Save the sentence dictionary.
'''
if isinstance(example, np.ndarray): example = example[0]
i = 0
for token_word in example.split():
if i != 0:
self.sen_dicts[index][token_word+before_word] += 1
i += 1
before_word = token_word
def addToword(self, example, dict_index):
'''
The bag-of-word part
word_dicts: Save the word dictionary.
'''
for token_word in example.split():
self.word_dicts[dict_index][token_word] += 1
def Train_word(self, dataset, labels):
self.examples = dataset
self.labels = labels
if not isinstance(self.examples, np.ndarray): self.examples = np.array(self.examples)
if not isinstance(self.labels, np.ndarray): self.labels = np.array(self.labels)
# Constructing BoW for each category
for wordidx, word in enumerate(self.classes):
all_cat_examples = self.examples[self.labels == word]
cleaned_examples = [preprocess(cat_example) for cat_example in all_cat_examples]
cleaned_examples = pd.DataFrame(data=cleaned_examples)
np.apply_along_axis(self.addToword, 1, cleaned_examples, wordidx)
prob_classes = np.empty(self.classes.shape[0])
all_words = []
cat_word_counts = np.empty(self.classes.shape[0])
for wordidx, word in enumerate(self.classes):
# Calculating prior probability p(c) for each category
prob_classes[wordidx] = np.sum(self.labels == word) / float(self.labels.shape[0])
# Calculating total counts of all the words of each category
count = list(self.word_dicts[wordidx].values())
cat_word_counts[wordidx] = np.sum(
np.array(list(self.word_dicts[wordidx].values()))) + 1 # |v| is remaining to be added
# get all words of this category
all_words += self.word_dicts[wordidx].keys()
# get vocabulary V of entire training set
self.vocabword = np.unique(np.array(all_words))
self.vocab_length_word = self.vocabword.shape[0]
# computing denominator value
denoms = np.array(
[cat_word_counts[cat_index] + self.vocab_length_word + 1 for cat_index, cat in enumerate(self.classes)])
self.cats_info_word = [(self.word_dicts[wordidx], prob_classes[wordidx], denoms[wordidx]) for wordidx, word in
enumerate(self.classes)]
self.cats_info_word = np.array(self.cats_info_word)
def Train_sentence(self, dataset, labels):
self.examples = dataset
self.labels = labels
if not isinstance(self.examples, np.ndarray): self.examples = np.array(self.examples)
if not isinstance(self.labels, np.ndarray): self.labels = np.array(self.labels)
# constructing sentence for each category
for sentence_idx, sentence in enumerate(self.classes):
all_sen_examples = self.examples[self.labels == sentence]
cleaned_examples = [preprocess(sen) for sen in all_sen_examples]
cleaned_examples = pd.DataFrame(data=cleaned_examples)
            # now construct the bag-of-sentences of this particular category
np.apply_along_axis(self.addTosen, 1, cleaned_examples, sentence_idx)
prob_classes = np.empty(self.classes.shape[0])
all_sentence = []
cat_sen_counts = np.empty(self.classes.shape[0])
for cat_index, cat in enumerate(self.classes):
# Calculating prior probability p(c) for each category
prob_classes[cat_index] = np.sum(self.labels == cat) / float(self.labels.shape[0])
# Calculating total counts of all the sentence of each class
count = list(self.sen_dicts[cat_index].values())
cat_sen_counts[cat_index] = np.sum(
np.array(list(self.sen_dicts[cat_index].values()))) + 1 # |v| is remaining to be added
# get all sentence of this category
all_sentence += self.sen_dicts[cat_index].keys()
# get vocabulary V of entire training set
self.vocab = np.unique(np.array(all_sentence))
self.vocab_length = self.vocab.shape[0]
# computing denominator value
denoms = np.array(
[cat_sen_counts[cat_index] + self.vocab_length + 1 for cat_index, cat in enumerate(self.classes)])
self.cats_info_sen = [(self.sen_dicts[cat_index], prob_classes[cat_index], denoms[cat_index]) for cat_index, cat in
enumerate(self.classes)]
self.cats_info_sen = np.array(self.cats_info_sen)
def probability(self, test_example):
likelihood_prob = np.zeros(self.classes.shape[0])
# finding probability
for cat_index, cat in enumerate(self.classes):
i = 0
for test_token in test_example.split():
if i != 0:
test_token_counts = self.cats_info_sen[cat_index][0].get(test_token + before_word, 0) + 1
token_word_counts = self.cats_info_word[cat_index][0].get(test_token, 0) + 1
# likelihood of this test_token word
sentence_token_prob = test_token_counts / float(self.cats_info_sen[cat_index][2])
word_token_prob = token_word_counts / float(self.cats_info_word[cat_index][2])
test_token_prob = sentence_token_prob * word_token_prob
likelihood_prob[cat_index] += np.log(test_token_prob)
else:
token_word_counts = self.cats_info_word[cat_index][0].get(test_token, 0) + 1
word_token_prob = token_word_counts / float(self.cats_info_word[cat_index][2])
likelihood_prob[cat_index] += np.log(word_token_prob)
i += 1
before_word = test_token
post_prob = np.empty(self.classes.shape[0])
for cat_index, cat in enumerate(self.classes):
            # combine the likelihood with the log-priors from the sentence and word models
            post_prob[cat_index] = likelihood_prob[cat_index] + np.log(self.cats_info_sen[cat_index][1]) + np.log(self.cats_info_word[cat_index][1])
return post_prob
def Test(self, test_set):
predictions = []
for example in test_set:
cleaned_example = preprocess(example)
# simply get the posterior probability of each document
            post_prob = self.probability(cleaned_example)  # posterior probability of this document for each category
predictions.append(self.classes[np.argmax(post_prob)])
return np.array(predictions)
newsgroups_train = fetch_20newsgroups(subset='train')
train_data = newsgroups_train.data  # getting all training examples
train_labels = newsgroups_train.target
newsgroups_test = fetch_20newsgroups(subset='test') # loading test data
test_data = newsgroups_test.data # get test set examples
test_labels = newsgroups_test.target
nb = Self_NaiveBayes(np.unique(train_labels)) # instantiate a NB class object
nb.Train_sentence(train_data, train_labels)
nb.Train_word(train_data, train_labels)
prediction = nb.Test(test_data)  # get predictions for the test set
test_acc = np.sum(prediction == test_labels) / float(test_labels.shape[0])
```
## Test result (accuracy)
```
test_acc
```
## Confusion matrix
```
def plot_confusion_matrix(cm, classes,
title='Confusion matrix',
cmap=plt.cm.PuBu):
"""
    This function prints and plots the row-normalized confusion matrix.
"""
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], '.2f'),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('Ground Truth')
plt.xlabel('Prediction')
plt.tight_layout()
# Compute confusion matrix
cma = confusion_matrix(test_labels, prediction, labels=None, sample_weight=None)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure(figsize=(12, 10), facecolor='w', edgecolor='b')
plot_confusion_matrix(cma, classes=newsgroups_test.target_names,
title='Confusion Matrix on Naive Bayes(Sentence and Words)')
plt.savefig('new_NB.png')
```
Predict CaCO3 and TOC over the whole spectra using the latest models (Aug. 2021).
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#plt.style.use('ggplot')
plt.style.use('seaborn-colorblind')
#plt.style.use('dark_background')
plt.rcParams['figure.dpi'] = 300
plt.rcParams['savefig.dpi'] = 300
plt.rcParams['savefig.bbox'] = 'tight'
plt.rcParams['savefig.transparent'] = True
%matplotlib inline
import datetime
date = datetime.datetime.now().strftime('%Y%m%d')
print(date)
```
# Read spe dataset and models
```
merge_df = pd.read_csv('data/spe+bulk_dataset_20210825.csv', index_col=0)
spe_df = pd.read_csv('data/spe_dataset_20210818.csv', index_col=0)
spe_df
spe_df = spe_df[spe_df.core != 'SO202-37-2_re']
spe_df.shape
merge_df
2000/57240*100
X = spe_df.iloc[:, :2048].values
X = X / X.sum(axis = 1, keepdims = True)
from joblib import load
m_caco3 = load('models/caco3_nmf+svr_model_20210823.joblib')
m_toc_svr = load('models/toc_nmf+svr_model_20210823.joblib')
```
# Predict
```
y_caco3 = np.exp(m_caco3.predict(X))
y_toc_svr = np.exp(m_toc_svr.predict(X))
```
# Build dataset
```
predict_df = spe_df.iloc[:, -5:].copy()
predict_df['CaCO3 prediction (wt%)'] = y_caco3
predict_df['TOC prediction (wt%)'] = y_toc_svr
```
# Check
```
mask = (predict_df['CaCO3 prediction (wt%)'] > 100) | (predict_df['TOC prediction (wt%)'] > 100)
print('There are {} ({:.2f} %) predictions having values over 100.'.format(len(predict_df[mask]), len(predict_df[mask])/len(predict_df)*100))
plt.hist(predict_df.loc[predict_df['CaCO3 prediction (wt%)'] > 100, 'CaCO3 prediction (wt%)'])
plt.text(200, 200, '{} points have carbonates content > 100 (wt%)'.format(len(predict_df[predict_df['CaCO3 prediction (wt%)'] > 100])))
plt.ylabel('count')
plt.xlabel('wt%');
predict_df.to_csv('results/predict_{}.csv'.format(date))
print(date)
```
## Plot the resolution difference
```
core = 'SO264-64-1'
fig, axes = plt.subplots(2, 1, figsize=(7, 3.7), sharex='col')
axes[0].plot(predict_df.loc[predict_df.core == core,
'composite_depth_mm']*.001, predict_df.loc[predict_df.core == core, 'CaCO3 prediction (wt%)'],
label='Prediction', alpha=.8, c='gray', lw=1)
axes[0].scatter(merge_df.loc[merge_df.core == core,
'mid_depth_mm']*.001, merge_df.loc[merge_df.core == core, 'CaCO3%'],
label='Measurement', s=3)
axes[0].set_ylabel('Carbonate (wt %)')
axes[0].set_xlim(-.3, 18.8)
axes[0].legend()
axes[1].plot(predict_df.loc[predict_df.core == core,
'composite_depth_mm']*.001, predict_df.loc[predict_df.core == core, 'TOC prediction (wt%)'],
label='Prediction', alpha=.8, c='gray', lw=1)
axes[1].scatter(merge_df.loc[merge_df.core == core,
'mid_depth_mm']*.001, merge_df.loc[merge_df.core == core, 'TOC%'],
label='Measurement', s=3)
axes[1].set_ylabel('TOC (wt %)')
axes[1].set_xlabel('Depth (m)')
fig.subplots_adjust(hspace=.08)
fig.savefig('results/prediction_{}_{}.png'.format(core, date))
```
```
import panel as pn
pn.extension('vtk')
```
The ``VTK`` pane renders VTK objects and vtk.js files inside a panel, making it possible to interact with complex geometries in 3D.
#### Parameters:
For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
* **``camera``** (dict): A dictionary reflecting the current state of the VTK camera
* **``enable_keybindings``** (bool): A boolean to activate/deactivate keybindings. Bound keys are:
- s: set representation of all actors to *surface*
- w: set representation of all actors to *wireframe*
- v: set representation of all actors to *vertex*
- r: center the actors and move the camera so that all actors are visible
<br>**Warning**: These keybindings may not work as expected in a notebook context, if they interact with already bound keys
* **``object``** (str or object): Can be a string pointing to a local or remote file with a `.vtkjs` extension, or a `vtkRenderWindow` object
___
The simplest way to construct a VTK pane is to give it a vtk.js file which it will serialize and embed in the plot. The ``VTK`` pane also supports the regular sizing options provided by Bokeh, including responsive sizing modes:
```
dragon = pn.pane.VTK('https://raw.githubusercontent.com/Kitware/vtk-js/master/Data/StanfordDragon.vtkjs',
sizing_mode='stretch_width', height=400)
dragon
```
The ``VTK`` pane can also be updated like all other pane objects by replacing the ``object``:
```
dragon.object = "https://github.com/Kitware/vtk-js-datasets/raw/master/data/vtkjs/TBarAssembly.vtkjs"
```
## Camera control
Once a VTK pane has been displayed, it will automatically sync the camera state with the pane object. We can read the camera state from the corresponding parameter:
```python
> dragon.camera
{'position': [-21.490090356222225, 14.44963146483641, 26.581314468858984],
'focalPoint': [0, 4.969950199127197, 0],
'viewUp': [0.17670012087160802, 0.9635684210080306, -0.20078088883170594],
'directionOfProjection': [0.605834463228546,
-0.2672449261957517,
-0.749362897791989],
'parallelProjection': False,
'useHorizontalViewAngle': False,
'viewAngle': 30,
'parallelScale': 9.180799381276024,
'clippingRange': [26.442079567041056, 44.714416678555395],
'thickness': 1000,
'windowCenter': [0, 0],
'useOffAxisProjection': False,
'screenBottomLeft': [-0.5, -0.5, -0.5],
'screenBottomRight': [0.5, -0.5, -0.5],
'screenTopRight': [0.5, 0.5, -0.5],
'freezeFocalPoint': False,
'useScissor': False,
'projectionMatrix': None,
'viewMatrix': None,
'physicalTranslation': [0, -4.969950199127197, 0],
'physicalScale': 9.180799381276024,
'physicalViewUp': [0, 1, 0],
'physicalViewNorth': [0, 0, -1],
'mtime': 2237,
'distance': 35.47188491341284}
```
This technique also makes it possible to link the camera of two or more VTK panes together:
```
dragon1 = pn.pane.VTK('https://raw.githubusercontent.com/Kitware/vtk-js/master/Data/StanfordDragon.vtkjs',
height=400, sizing_mode='stretch_width')
dragon2 = pn.pane.VTK('https://raw.githubusercontent.com/Kitware/vtk-js/master/Data/StanfordDragon.vtkjs',
height=400, sizing_mode='stretch_width')
dragon1.jslink(dragon2, camera='camera')
dragon2.jslink(dragon1, camera='camera')
pn.Row(dragon1, dragon2)
```
and to modify the camera state in Python and trigger an update:
```
if dragon.camera:
dragon.camera['viewAngle'] = 50
dragon.param.trigger('camera')
```
## Rendering VTK objects
In addition to supporting vtk.js files, the VTK pane can also render objects defined using the ``vtk`` Python library.
There are slight differences from the classical VTK workflow: since rendering of the objects and interaction with the view are handled by the VTK pane, we don't need to call the `Render` method of the `vtkRenderWindow` (which would pop up the classical VTK window) or to specify a `vtkRenderWindowInteractor`.
```
import vtk
from vtk.util.colors import tomato
# This creates a polygonal cylinder model with eight circumferential
# facets.
cylinder = vtk.vtkCylinderSource()
cylinder.SetResolution(8)
# The mapper is responsible for pushing the geometry into the graphics
# library. It may also do color mapping, if scalars or other
# attributes are defined.
cylinderMapper = vtk.vtkPolyDataMapper()
cylinderMapper.SetInputConnection(cylinder.GetOutputPort())
# The actor is a grouping mechanism: besides the geometry (mapper), it
# also has a property, transformation matrix, and/or texture map.
# Here we set its color and rotate it around the x and y axes.
cylinderActor = vtk.vtkActor()
cylinderActor.SetMapper(cylinderMapper)
cylinderActor.GetProperty().SetColor(tomato)
# We must set ScalarVisibility to 0 to use the tomato color
cylinderMapper.SetScalarVisibility(0)
cylinderActor.RotateX(30.0)
cylinderActor.RotateY(-45.0)
# Create the graphics structure. The renderer renders into the render
# window.
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
# Add the actors to the renderer, set the background and size
ren.AddActor(cylinderActor)
ren.SetBackground(0.1, 0.2, 0.4)
geom_pane = pn.pane.VTK(renWin, width=500, height=500)
geom_pane
```
We can also add additional actors to the plot and then trigger an update:
```
sphere = vtk.vtkSphereSource()
sphereMapper = vtk.vtkPolyDataMapper()
sphereMapper.SetInputConnection(sphere.GetOutputPort())
sphereActor = vtk.vtkActor()
sphereActor.SetMapper(sphereMapper)
sphereActor.GetProperty().SetColor(tomato)
sphereMapper.SetScalarVisibility(0)
sphereActor.RotateX(30.0)
sphereActor.RotateY(-45.0)
sphereActor.SetPosition(0.5, 0.5, 0.5)
ren.AddActor(sphereActor)
geom_pane.param.trigger('object')
```
# Testing MHW Systems
```
# imports
from importlib import reload
import numpy as np
import os
from matplotlib import pyplot as plt
from pkg_resources import resource_filename
from datetime import date
import pandas
import sqlalchemy
import iris
import iris.quickplot as qplt
import h5py
from oceanpy.sst import io as sst_io
from oceanpy.sst import utils as sst_utils
from mhw_analysis.systems import cube as mhw_cube
from mhw_analysis.systems import build as mhw_build
from mhw_analysis.systems import io as mhw_sys_io
```
# Load a few things for guidance
```
SST = sst_io.load_noaa((1, 1, 2014))
SST
lats = SST.coord('latitude').points
lats[lats>40.]
lons = SST.coord('longitude').points
lons[(lons > 200) & (lons < 230)]
```
# Generate a faux pandas MHW Event table
## Pacific
### Locations
```
pac_lons = lons[(lons > 200) & (lons < 230)]
pac_lons.size
pac_lats = lats[(lats>40.) & (lats < 60.)]
pac_lats.size
pac_XX, pac_YY = np.meshgrid(pac_lons, pac_lats)
pac_XX.shape
```
### Times
```
pac_date_end = date(2013, 12, 31)
pac_date_end.toordinal()
rand_dur = np.random.randint(5,100, pac_lats.size*pac_lons.size).reshape((80,120))
rand_dur
pac_start = pac_date_end.toordinal() - rand_dur
pac_start.shape
```
### Random categories
```
ran_cat = np.random.randint(0,4, pac_lats.size*pac_lons.size).reshape((80,120))
```
### Build em
```
pac_mhw_events = dict(lat=[], lon=[], duration=[], time_start=[], category=[])
for ii in range(pac_start.shape[0]):
for jj in range(pac_start.shape[1]):
# Fill it up
pac_mhw_events['lat'].append(pac_YY[ii,jj])
pac_mhw_events['lon'].append(pac_XX[ii,jj])
pac_mhw_events['duration'].append(rand_dur[ii,jj])
pac_mhw_events['time_start'].append(pac_start[ii,jj])
pac_mhw_events['category'].append(ran_cat[ii,jj])
pac_tbl = pandas.DataFrame(pac_mhw_events)
pac_tbl.head()
```
## Atlantic
```
atl_lons = lons[(lons > (360-70)) & (lons < (360-20))]
atl_lats = lats[(lats > 10) & (lats < 50)]
atl_XX, atl_YY = np.meshgrid(atl_lons, atl_lats)
atl_XX = atl_XX.flatten()
atl_YY = atl_YY.flatten()
atl_XX.shape
nrand_atl = 500
pos_rand = np.random.choice(np.arange(atl_XX.size), nrand_atl, replace=False)
pos_rand[0:5]
```
### Time
```
atl_dur = 7
atl_start = pac_date_end.toordinal() - np.random.randint(10,100, nrand_atl)
atl_cat = np.random.randint(0,4, nrand_atl)
```
### Build em
```
atl_mhw_events = dict(lat=[], lon=[], duration=[], time_start=[], category=[])
for ii in range(nrand_atl):
# Fill it up
atl_mhw_events['lat'].append(atl_YY[pos_rand[ii]])
atl_mhw_events['lon'].append(atl_XX[pos_rand[ii]])
atl_mhw_events['duration'].append(atl_dur)
atl_mhw_events['time_start'].append(atl_start[ii])
atl_mhw_events['category'].append(atl_cat[ii])
atl_tbl = pandas.DataFrame(atl_mhw_events)
atl_tbl.head()
```
## Combine
```
mhw_tbl = pandas.concat([pac_tbl, atl_tbl])
mhw_tbl.head()
```
# Cube
```
reload(mhw_cube)
cube = mhw_cube.build_cube('test_cube.npz', mhw_events=mhw_tbl, dmy_start=(2013,1,1), dmy_end=(2013,12,31))
np.max(cube), cube.dtype
```
# Systems
## Build
```
reload(mhw_build)
mhw_sys, mhw_mask = mhw_build.main(cube=cube, mhwsys_file='test_mhw_systems.hdf',
dmy_start=(1983,1,1))
```
## Explore
```
mhw_sys = mhw_sys.sort_values('NSpax', ascending=False)
mhw_sys
```
----
# Ocean boundaries
## Simple tests
```
tst_file = os.path.join(resource_filename('mhw_analysis', 'systems'), 'tst_indian_systems.hdf')
tst_sys = mhw_sys_io.load_systems(mhw_sys_file=tst_file)
tst_sys
tst_msk_file = os.path.join(resource_filename('mhw_analysis', 'systems'), 'tst_indian_mask.hdf')
f = h5py.File(tst_msk_file, mode='r')
mask = f['mask'][:]
f.close()
mask[180,500,45]
```
## Semi-full test
```
tst2_file = os.path.join(resource_filename('mhw_analysis', 'systems'), 'test_basins_systems.hdf')
tst2_sys = mhw_sys_io.load_systems(mhw_sys_file=tst2_file)
imax = np.argmax(tst2_sys.NSpax)
tst2_sys.iloc[imax]
np.log10(175183369)
```
## Pacific blob?
```
lat_c, lon_c = sst_utils.noaa_oi_coords(as_iris_coord=True)
tst2_msk_file = os.path.join(resource_filename('mhw_analysis', 'systems'), 'test_basins_mask.hdf')
f = h5py.File(tst2_msk_file, mode='r')
mask2 = f['mask'][:]
f.close()
from matplotlib import pyplot as plt
import matplotlib.ticker as mticker
import cartopy.crs as ccrs
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
bigone = mask2 == 2196
np.sum(bigone)
i0, step=2015, 50
any_mask = np.sum(bigone[:,:,i0:i0+step], axis=2)
any_mask[any_mask > 0] = 1
np.sum(any_mask)
any_big_cube = iris.cube.Cube(any_mask, var_name='BigOne',
dim_coords_and_dims=[(lat_c, 0),
(lon_c, 1)])
fig = plt.figure(figsize=(10, 6))
plt.clf()
proj = ccrs.PlateCarree(central_longitude=-180.0)
ax = plt.gca(projection=proj)
# Pacific events
# Draw the contour with 25 levels.
cm = plt.get_cmap('Blues')
cplt = iris.plot.contourf(any_big_cube, cmap=cm) # , vmin=0, vmax=20)#, 5)
#cb = plt.colorbar(cplt, fraction=0.020, pad=0.04)
#cb.set_label('Blob')
# Add coastlines to the map created by contourf.
plt.gca().coastlines()
plt.show()
```
## Longest duration MHW System
```
np.max(tst2_sys.zboxmax-tst2_sys.zboxmin)
```
# Pandas
Pandas is a Python library dedicated to data analysis.
## Series
The Series data structure manages a **two-column data table**, in which:
- the data are ordered
- the first column contains a key (index)
- the second column contains values
- the second column has a name
A Series structure can be initialized **from a list** of values. In that case, Pandas automatically assigns a numeric index to each value, starting from zero.
```
import pandas as pd
animaux = ["chien", "chat", "lapin"]
pd.Series(animaux)
```
Note that in this case the data type is object.
```
nombres = [10,4,8]
ns = pd.Series(nombres)
ns
```
Note that in this case the data type is int64.
The Series structure stores its data as a **typed Numpy array**, which gives it a performance advantage over a plain list.
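As a quick check of this claim, we can look at the array underneath the `ns` Series created above (a small illustrative sketch):
```
# The data of a Series is held in a typed NumPy array
type(ns.values), ns.values.dtype
```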
```
nombres = [10,4,None]
pd.Series(nombres)
```
Note that the missing value None is converted to **np.nan** in a numeric array.
The presence of the NaN value can be tested as follows:
```
import numpy as np
np.isnan(np.nan)
```
A Series structure can also be initialized **from a dictionary**: in that case, Pandas uses the dictionary keys to initialize the index.
```
personne = { "nom":"Dupont", "prénom":"Jean", "age":40 }
s = pd.Series(personne)
s
```
The **index** property gives access to the index of a Series structure:
```
s.index
```
The index can also be initialized by passing a list as a named parameter of the constructor:
```
pd.Series(["Dupont","Jean",40], index=["nom","prénom","age"])
```
Values stored in a Series structure can be accessed:
- by their position: the **iloc**[position] property
- by their index (key): the **loc**[key] property
```
s.iloc[2]
s.loc["nom"]
```
## Performance
We can iterate over the values of a Series and compute their sum explicitly:
```
somme = 0
for num in ns:
somme = somme + num
somme
```
But this method is slow, because it does not take advantage of the **parallel computing** capabilities of modern computers.
Numpy and Pandas define methods that apply directly to their data structures and are optimized to perform operations in parallel:
```
total = np.sum(ns)
total
```
The example below illustrates how to **measure the performance difference** between these two techniques:
```
# Create a Series of 100,000 elements
s = pd.Series(np.random.randint(0,100,100000))
# The head() method displays the first 5 elements
s.head()
len(s)
```
The **%%timeit** magic measures the execution time of a notebook cell. It takes as a parameter the number of times the code fragment should be re-run before averaging the execution times:
```
%%timeit -n 10
somme = 0
for num in s:
somme += num
%%timeit -n 10
somme = np.sum(s)
```
We see that the second method, which uses Numpy's capabilities, is about **100 times faster** on the test machine (the exact factor depends on the machine).
You should therefore **never write an explicit loop over the elements** of a Numpy array or a Pandas structure (for or while loop).
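A few loop-free equivalents of common patterns, reusing the `s` Series defined above (illustrative sketch; the results depend on the random draw):
```
# Vectorized alternatives to explicit loops
s + 2              # broadcast: add 2 to every element
s[s > 50]          # boolean mask instead of an if inside a loop
(s > 50).sum()     # count the elements matching a condition
```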
## Differences from a relational database
Both the indexes (keys) and the values can be of **different types** within the same Series structure:
```
mixed = pd.Series([1,2,3])
mixed.loc["animal"] = "chien"
mixed
```
The indexes (keys) are **not necessarily unique** and can be repeated:
```
repeat = pd.Series(["chien","rose","chat"],index=["animal","fleur","animal"])
repeat
```
In that case, the result of a query on a Series structure is not a single value but again a Series structure:
```
repeat.loc["animal"]
repeat.loc["fleur"]
```
Operations performed on a Series structure do **not modify the original structure**; they return a new object.
Example with the **append()** method:
```
s1 = pd.Series([1,2,3])
s2 = pd.Series([4,5,6])
s1.append(s2)
s1
```
## DataFrame
The **DataFrame structure is a two-dimensional data table**, in which each row has an index (key) and each column has a name.
The list of indexes (keys) is accessible through the **index** property, and the list of column names through the **columns** property.
As in the Series structure, the **loc[] and iloc[] properties** give access to rows by index or by position.
The indexing operator (square brackets) gives access to a particular value of a row from the column name.
A DataFrame can be created from:
- a **list of Series**, where each Series represents a row of data
- a **list of dictionaries**, where each dictionary represents a row of data
```
achat1 = pd.Series({"nom":"Jean","article":"pain","prix":1.1})
achat2 = pd.Series({"nom":"Pierre","article":"lait","prix":2.5})
achat3 = pd.Series({"nom":"Marc","article":"chips","prix":1.9})
df = pd.DataFrame([achat1,achat2,achat3],index=["magasin1","magasin1","magasin2"])
df
```
Selecting a row is done with the loc property:
```
df.index
df.loc["magasin2"]
type(df.loc["magasin2"])
df.loc["magasin1"]
type(df.loc["magasin1"])
```
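Position-based access with `iloc`, mentioned above, works the same way (a small sketch on the `df` table just built):
```
# Select rows by position rather than by index label
df.iloc[2]       # third row, returned as a Series
df.iloc[0:2]     # first two rows, returned as a DataFrame
```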
Selecting a column is done simply by its name:
```
df.columns
df["article"]
df["nom"]
```
The recommended way to select a single value from the table is the following:
```
df.loc["magasin2","article"]
```
It is also possible to chain a row selection followed by a column selection, but keep in mind that a new Series or DataFrame structure is created on each call, which is inefficient for reads and leads to errors for writes (the original structure is not modified as expected).
**Chaining should therefore be avoided** with Pandas.
```
df.loc["magasin2"]["article"]
df.loc["magasin2"]["article"] = "alumettes"
df.loc["magasin2"]["article"]
```
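By contrast, a write that goes through a single `loc` call does modify the original table (illustrative sketch, reusing the value from the failed chained write above):
```
# A single loc call writes directly into the original DataFrame
df.loc["magasin2", "article"] = "alumettes"
df.loc["magasin2", "article"]
```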
The **T** property gives access to a transposed version of the table, which swaps columns and rows:
```
df.T
df.T.loc["article"]
```
Selecting rows and columns of a DataFrame supports slicing syntax:
```
df.loc["magasin2":,["nom","prix"]]
```
## Operations on the DataFrame structure
The **drop()** method removes a row designated by its index. Note that, as mentioned earlier, this method returns a new structure with one row fewer and does not modify the original structure.
```
df.drop("magasin2")
df
```
The **inplace** parameter modifies the original structure directly:
```
dfc = df.copy()
dfc.drop("magasin2",inplace=True)
dfc
```
The **axis** parameter allows a column to be removed:
```
dfc.drop("nom",axis=1)
```
A new column is added simply by assigning a value to it:
```
dfc["quantité"] = None
dfc
```
A column can be modified in bulk using the operators seen in the Performance section. For example, to apply a 20% price reduction, we can write:
```
df["prix"] *= 0.8
df
```
# Strings and Stuff in Python
```
import numpy as np
```
## Strings are just arrays of characters
```
s = 'spam'
s,len(s),s[0],s[0:2]
s[::-1]
```
#### But unlike numerical arrays, you cannot reassign elements:
```
s[0] = "S"
s
```
### Arithmetic with Strings
```
s = 'spam'
e = "eggs"
s + e
s + " " + e
4 * (s + " ") + e
print(4 * (s + " ") + s + " and\n" + e) # use \n to get a newline with the print function
```
### String operators and comparisons
```
"spam" == "good"
"spam" != "good"
"spam" == "spam"
"sp" < "spam"
"spam" < "eggs"
"sp" in "spam"
"sp" not in "spam"
```
## Python supports `Unicode` characters
You can enter `unicode` characters directly from the keyboard (depends on your operating system), or you can use the character's Unicode code point.
A list of Unicode characters can be found [here](https://en.wikipedia.org/wiki/List_of_Unicode_characters).
For example, the Unicode code point for the greek capital omega is `U+03A9`, so you can create the character with the escape `\U000003A9`.
```
my_resistor = "This resistor has a value of 100 k\U000003A9"
print(my_resistor)
Ω = 1e3
Ω + np.pi
```
### [Emoji](https://en.wikipedia.org/wiki/Emoji) are unicode characters, so you can use them as well (not all OSs will show all characters!)
```
radio_active = "\U00002622"
wink = "\U0001F609"
print((radio_active * 5) + " " + (wink * 3))
```
### Emoji can not be used as variable names (at least not yet ...)
```
☢ = 2.345
☢ ** 2
```
### Raw strings
* Sometimes you do not want Python to interpret anything in the string
* You can do this by adding an "r" to the front of the string
```
my_resistor = r"This resistor has a value of 100 k\U000003A9"
print(my_resistor)
```
### Watch out for variable types!
```
n = 4
print("I would like " + n + " orders of spam")
print("I would like " + str(n) + " orders of spam")
```
## Use explicit formatting to avoid these errors
### Python string formatting has the form:
`{Variable Index: Format Type} .format(Variable)`
```
A = 42
B = 1.23456
C = 1.23456e10
D = 'Forty Two'
"I like the number {0:d}".format(A)
"I like the number {0:s}".format(D)
"The number {0:f} is fine, but not a cool as {1:d}".format(B,A)
"The number {0:.3f} is fine, but not a cool as {1:d}".format(C,A) # 3 places after decimal
"The number {0:.3e} is fine, but not a cool as {1:d}".format(C,A) # sci notation
"{0:g} and {1:g} are the same format but different results".format(B,C)
```
### Nice trick to convert number to a different base
```
"Representation of the number {1:s} - dec: {0:d}; hex: {0:x}; oct: {0:o}; bin: {0:b}".format(A,D)
```
## Formatting is way better than piecing strings together
```
import pandas as pd
planet_table = pd.read_csv('./Data/Planets.csv')
print(planet_table)
for index,value in enumerate(planet_table['Name']):
a = planet_table['a'][index]
if (a < 3.0):
Place = "Inner"
else:
Place = "Outer"
my_string = ("The planet {0:s}, at a distance of {1:.1f} AU, is in the {2:s} solar system"
.format(value,a,Place))
print(my_string)
```
### Really long strings
```
long_string = (
"""
The planets {0:s} and {1:s} are at a distance
of {2:.1f} AU and {3:.1f} AU from the Sun.
"""
.format(planet_table['Name'][1],planet_table['Name'][3],
planet_table['a'][1],planet_table['a'][3])
)
print(long_string)
```
### You can also use the `textwrap` module
```
import textwrap
lots_of_spam = (s + " ") * 100
print(lots_of_spam)
print(textwrap.fill(lots_of_spam, width=70))
```
## Working with strings
```
line = "My hovercraft is full of eels"
```
### Find and Replace
```
line.replace('eels', 'wheels')
```
### Justification and Cleaning
```
line.center(100)
line.ljust(100)
line.rjust(100, "*")
line2 = " My hovercraft is full of eels "
line2.strip()
line3 = "*$*$*$*$*$*$*$*$My hovercraft is full of eels*$*$*$*$"
line3.strip('*$')
line3.lstrip('*$'), line3.rstrip('*$')
```
### Splitting and Joining
```
line.split()
'_*_'.join(line.split())
' '.join(line.split()[::-1])
```
### Line Formatting
```
anotherline = "mY hoVErCRaft iS fUlL oF eEELS"
anotherline.upper()
anotherline.lower()
anotherline.title()
anotherline.capitalize()
anotherline.swapcase()
```
# This notebook processes CAFE v3 ocean daily data for building climatologies. Only the last 100 years are used.
Currently only runs on Raijin, as control run data not yet transferred to Canberra
```
# Import packages -----
import pandas as pd
import xarray as xr
import numpy as np
from ipywidgets import FloatProgress
from dateutil.relativedelta import relativedelta
```
#### Initialise
```
# Standard naming -----
fields = pd.DataFrame( \
{'name_CAFE': ['sst', 'patm_t', 'eta_t', 'sss', 'u_surf', 'v_surf', 'mld'],
'name_std' : ['sst', 'patm', 'eta', 'sss', 'u_s', 'v_s', 'mld']}
)
name_dict = fields.set_index('name_CAFE').to_dict()['name_std']
fields
```
#### Only use last 100 years
```
# Loop over all paths -----
base = '/g/data1/v14/coupled_model/v3/OUTPUT/'
years = range(400,500)
paths = []
for year in years:
path = base + 'ocean_daily_0' + str(year) + '_01_01.nc'
paths.append(path)
ds = xr.open_mfdataset(paths, autoclose=True) \
.drop(['average_T1','average_T2','average_DT','time_bounds',
'area_t','area_u','geolat_c','geolat_t','ht']) \
.rename(name_dict)
if 'xu_ocean' in ds.dims:
ds = ds.rename({'xu_ocean':'lon_u','yu_ocean':'lat_u'})
if 'xt_ocean' in ds.dims:
ds = ds.rename({'xt_ocean':'lon_t','yt_ocean':'lat_t'})
# Use year 2016 as time -----
path = '/g/data1/v14/forecast/v1/yr2016/mn1/OUTPUT.1/ocean_daily*.nc'
dataset = xr.open_mfdataset(path, autoclose=True)
time_use = xr.concat([dataset.time[:59], dataset.time[60:366]],dim='time')
time_ly = dataset.time[59]
# Make month_day array of month-day -----
m = [str(ds.time.values[i].timetuple()[1]).zfill(2) + '-' for i in range(len(ds.time))]
d = [str(ds.time.values[i].timetuple()[2]).zfill(2) for i in range(len(ds.time))]
md = np.core.defchararray.add(m, d)
# Replace time array with month_day array and groupby -----
ds['time'] = md
clim = ds.groupby('time').mean(dim='time',keep_attrs=True)
clim['time'] = time_use
# Replicate Feb 28th as Feb 29th to deal with leap years -----
clim_ly = clim.copy().sel(time='2016-02-28')
clim_ly['time'] = np.array([time_ly.values])
clim = xr.auto_combine([clim,clim_ly]).sortby('time')
# Save the climatology -----
save_fldr = '/g/data1/v14/squ027/tmp/'
clim.to_netcdf(save_fldr + 'cafe.c3.ocean.400_499.clim.nc', mode = 'w',
encoding = {'time':{'dtype':'float','calendar':'JULIAN',
'units':'days since 0001-01-01 00:00:00'}})
```
```
from datascience import *
path_data = '../../data/'
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plots
plots.style.use('fivethirtyeight')
cones = Table.read_table(path_data + 'cones.csv')
nba = Table.read_table(path_data + 'nba_salaries.csv').relabeled(3, 'SALARY')
movies = Table.read_table(path_data + 'movies_by_year.csv')
```
# Introduction to Tables
We can now apply Python to analyze data. We will work with data stored in Table structures.
Tables are a fundamental way of representing data sets. A table can be viewed in two ways:
* a sequence of named columns that each describe a single attribute of all entries in a data set, or
* a sequence of rows that each contain all information about a single individual in a data set.
We will study tables in great detail in the next several chapters. For now, we will just introduce a few methods without going into technical details.
The table `cones` has been imported for us; later we will see how, but here we will just work with it. First, let's take a look at it.
```
cones
```
The table has six rows. Each row corresponds to one ice cream cone. The ice cream cones are the *individuals*.
Each cone has three attributes: flavor, color, and price. Each column contains the data on one of these attributes, and so all the entries of any single column are of the same kind. Each column has a label. We will refer to columns by their labels.
A table method is just like a function, but it must operate on a table. So the call looks like
`name_of_table.method(arguments)`
For example, if you want to see just the first two rows of a table, you can use the table method `show`.
```
cones.show(2)
```
You can replace 2 by any number of rows. If you ask for more than six, you will only get six, because `cones` only has six rows.
### Choosing Sets of Columns ###
The method `select` creates a new table consisting of only the specified columns.
```
cones.select('Flavor')
```
This leaves the original table unchanged.
```
cones
```
You can select more than one column, by separating the column labels by commas.
```
cones.select('Flavor', 'Price')
```
You can also *drop* columns you don't want. The table above can be created by dropping the `Color` column.
```
cones.drop('Color')
```
You can name this new table and look at it again by just typing its name.
```
no_colors = cones.drop('Color')
no_colors
```
Like `select`, the `drop` method creates a smaller table and leaves the original table unchanged. In order to explore your data, you can create any number of smaller tables by choosing or dropping columns. It will do no harm to your original data table.
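Because each of these methods returns a new table, the calls can also be chained. As a small illustration (using only the `cones` table and the methods introduced above), the following keeps the flavor and price information in a single step:
```
cones.drop('Color').select('Flavor', 'Price')
```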
### Sorting Rows ###
The `sort` method creates a new table by arranging the rows of the original table in ascending order of the values in the specified column. Here the `cones` table has been sorted in ascending order of the price of the cones.
```
cones.sort('Price')
```
To sort in descending order, you can use an *optional* argument to `sort`. As the name implies, optional arguments don't have to be used, but they can be used if you want to change the default behavior of a method.
By default, `sort` sorts in increasing order of the values in the specified column. To sort in decreasing order, use the optional argument `descending=True`.
```
cones.sort('Price', descending=True)
```
Like `select` and `drop`, the `sort` method leaves the original table unchanged.
### Selecting Rows that Satisfy a Condition ###
The `where` method creates a new table consisting only of the rows that satisfy a given condition. In this section we will work with a very simple condition, which is that the value in a specified column must be equal to a value that we also specify. Thus the `where` method has two arguments.
The code in the cell below creates a table consisting only of the rows corresponding to chocolate cones.
```
cones.where('Flavor', 'chocolate')
```
The arguments, separated by a comma, are the label of the column and the value we are looking for in that column. The `where` method can also be used when the condition that the rows must satisfy is more complicated. In those situations the call will be a little more complicated as well.
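As a sketch of that more general form (it relies on the `are` predicates that come with the `datascience` module, which are not covered in this section), a condition such as "price above 5 dollars" would be written as:
```
cones.where('Price', are.above(5))
```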
It is important to provide the value exactly. For example, if we specify `Chocolate` instead of `chocolate`, then `where` correctly finds no rows where the flavor is `Chocolate`.
```
cones.where('Flavor', 'Chocolate')
```
Like all the other table methods in this section, `where` leaves the original table unchanged.
### Example: Salaries in the NBA ###
"The NBA is the highest paying professional sports league in the world," [reported CNN](http://edition.cnn.com/2015/12/04/sport/gallery/highest-paid-nba-players/) in March 2016. The table `nba` contains the [salaries of all National Basketball Association players](https://www.statcrunch.com/app/index.php?dataid=1843341) in 2015-2016.
Each row represents one player. The columns are:
| **Column Label** | Description |
|--------------------|-----------------------------------------------------|
| `PLAYER` | Player's name |
| `POSITION` | Player's position on team |
| `TEAM` | Team name |
|`SALARY` | Player's salary in 2015-2016, in millions of dollars|
The code for the positions is PG (Point Guard), SG (Shooting Guard), PF (Power Forward), SF (Small Forward), and C (Center). But what follows doesn't involve details about how basketball is played.
The first row shows that Paul Millsap, Power Forward for the Atlanta Hawks, had a salary of almost $\$18.7$ million in 2015-2016.
```
nba
```
Fans of Stephen Curry can find his row by using `where`.
```
nba.where('PLAYER', 'Stephen Curry')
```
We can also create a new table called `warriors` consisting of just the data for the Golden State Warriors.
```
warriors = nba.where('TEAM', 'Golden State Warriors')
warriors
```
By default, the first 10 lines of a table are displayed. You can use `show` to display more or fewer. To display the entire table, use `show` with no argument in the parentheses.
```
warriors.show()
```
The `nba` table is sorted in alphabetical order of the team names. To see how the players were paid in 2015-2016, it is useful to sort the data by salary. Remember that by default, the sorting is in increasing order.
```
nba.sort('SALARY')
```
These figures are somewhat difficult to compare as some of these players changed teams during the season and received salaries from more than one team; only the salary from the last team appears in the table.
The CNN report is about the other end of the salary scale – the players who are among the highest paid in the world. To identify these players we can sort in descending order of salary and look at the top few rows.
```
nba.sort('SALARY', descending=True)
```
Kobe Bryant, since retired, was the highest earning NBA player in 2015-2016.
## Allele-specific expression analysis in *An. coluzzii*
```
import matplotlib.pyplot as P
%matplotlib inline
import numpy as np
import pandas as pd
RNA = ['A','B','C','D'] #These are wells that contain cDNA
D = pd.read_csv("round2/iPLEX_HYBRID_MAPHIG_6_24_16.csv")
#focus on UTR SNP
cyp9k1 = D.loc[D['Assay']=='CYP9K1-3u']
#grab only cDNA
#cyp9k1 = cyp9k1.loc[cyp9k1['WELL'].str[0].isin(RNA)]
cyp9k1 = cyp9k1.loc[cyp9k1['WELL'].str[0].isin(['A','B','C','D'])]
cyp2 = cyp9k1.loc[cyp9k1['OLIGO_ID']=='A']
cyp1 = cyp9k1.loc[cyp9k1['OLIGO_ID']=='G']
ASE = pd.merge(cyp1,cyp2,on='WELL').sort_values(by='SAMPLE_x')
ASE.drop(ASE.columns[[2,3,6,7,8,9,12]], axis=1, inplace=True)
#.sort_values(by='WELL')
ASE['ASE'] = ASE['AREA_x']/ASE['AREA_y']
#print(ASE[['SAMPLE_x','ASE']])
ASE['T'] = ASE['SAMPLE_x'].str[1]=="T"
#Drop inf and NaN samples
ASE = ASE.drop([47,5,12,43,20,41])
print(ASE)
ASE.boxplot(column='ASE',by='T')
#control = ASE.loc[ASE['SAMPLE_x'].str[1]=="C"]
#treat = ASE.loc[ASE['SAMPLE_x'].str[1]=="T"]
#control.boxplot(column='ASE')
#print(treat)
import matplotlib.pyplot as P
%matplotlib inline
import numpy as np
import pandas as pd
import math
#CYP9K1 3'UTR SNP G=cyp1 ; A=cyp2
##Round 1
D = pd.read_table("pig_iplex_june9.txt")
#focus on UTR SNP
r1 = D.loc[D['Assay']=='CYP9K1-3u']
r1cyp2 = r1.loc[r1['OLIGO_ID']=='A']
r1cyp1 = r1.loc[r1['OLIGO_ID']=='G']
r1df = pd.merge(r1cyp1,r1cyp2,on='WELL').sort_values(by='SAMPLE_x')
#remove unnecessary columns
r1df.drop(r1df.columns[[2,3,6,7,8,9,12]], axis=1, inplace=True)
#grab only cDNA
r1df['cDNA'] = r1df['WELL'].str[0].isin(['A','B'])#These are wells that contain cDNA
r1df = r1df.loc[r1df['AREA_y']>0.00001]
r1df['T'] = r1df['SAMPLE_x'].str[10]=="T"
#print('r1',len(r1df),r1df)
##Round 2
D2 = pd.read_csv("round2/iPLEX_HYBRID_MAPHIG_6_24_16.csv")
#focus on UTR SNP
r2 = D2.loc[D2['Assay']=='CYP9K1-3u']
r2cyp2 = r2.loc[r2['OLIGO_ID']=='A']
r2cyp1 = r2.loc[r2['OLIGO_ID']=='G']
r2df = pd.merge(r2cyp1,r2cyp2,on='WELL').sort_values(by='SAMPLE_x')
r2df.drop(r2df.columns[[2,3,6,7,8,9,12]], axis=1, inplace=True)
#highlight cDNA
r2df['cDNA'] = r2df['WELL'].str[0].isin(['A','B','C','D'])#These are wells that contain cDNA
#drop samples that didn't amplify well
r2df = r2df.loc[r2df['AREA_y']>0.00001]
r2df['T'] = r2df['SAMPLE_x'].str[1]=="T"
#concatenate 1st and 2nd iPLEX run data
ALL = pd.concat([r1df,r2df])
#Calculate ASE cyp1/cyp2
ALL['ASE'] = ALL['AREA_x']/ALL['AREA_y']
#Drop inf and NaN samples
ALL = ALL.drop([13,17,76,80,85])
##print('all',len(ALL))
#plot raw ASE on cDNA samples
RNA = ALL.loc[ALL['cDNA']==True]
##print(len(RNA))
RNA.boxplot(column='ASE',by='T')
P.title('Raw ASE in single female malpighian tubules')
P.suptitle("")
P.ylabel("iPLEX area cyp1/area cyp2")
#Estimate amplification bias from ASE on DNA
allelic_bias = ALL.loc[ALL['cDNA']==False]
##print(len(allelic_bias))
allelic_bias.boxplot(column='ASE',by='T')
P.title('DNA from single female malpighian tubules')
P.suptitle("")
norm_factor = allelic_bias['ASE'].mean()
P.ylabel("iPLEX area cyp1/area cyp2")
#Plot ASE normalized by the mean allelic bias in DNA (assumes no sample-specific bias for now)
RNA['Nase'] = RNA['ASE']/norm_factor
#print(len(RNA))
RNA.boxplot(column='Nase',by='T')
#print('median',RNA['Nase'].median)
#print(RNA['Nase'].median())
P.title('Normalized ASE in single female malpighian tubules')
P.suptitle("")
#P.semilogy()
P.ylabel("iPLEX area cyp1/area cyp2")
#print(RNA)
#print(RNA.describe())
t = RNA.loc[RNA['T']==True]
c = RNA.loc[RNA['T']==False]
print("C",c.describe())
print("T",t.describe())
```
# Ames Housing Dataset price modeling
We investigate the data to remove unnecessary columns and min-max scale the label.
This could either happen in the private data lake or on the modeler's machine. In this case, we mimic a modeler requesting certain fields and a certain series of preprocessing steps.
```
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set_style("darkgrid")
df = pd.read_csv("data.csv")
print(f"Original size of dataframe {df.shape}")
```
# Initial Investigation
We first subset the data according to some pre-determined requirements, such as keeping only numerical data and data that is immediately relevant to the problem. Take a look at the data_description.txt file to get a better understanding of this.
```
residential_areas = {"RH", "RL", "RP", "RM"}
acceptable_housing_conditions = {10, 9, 8, 7, 6}
df = df[df["MSZoning"].isin(residential_areas)]
print(f"First subset iteration taking only residential areas. Size of dataframe {df.shape}\n")
df = df[df["OverallCond"].isin(acceptable_housing_conditions)]
print(f"Second subset iteration taking only homes above some quality. Size of dataframe {df.shape}")
df
df.columns
columns_to_keep = ["LotArea", 'YearBuilt', 'TotalBsmtSF',
'1stFlrSF', '2ndFlrSF', 'MiscVal',
"GarageCars", "Fireplaces", "BedroomAbvGr",
"SalePrice" # Our label
]
df = df[columns_to_keep]
df = df.reset_index()
df.head()
df.info()
```
We note that there are 546 entries in the dataset and that all of the entries are non-null, which is appropriate for our problem setup.
```
df.describe()
df.head()
```
# Price Investigation
In this section we conduct EDA to gain an understanding of the distribution of prices.
```
df.boxplot(column="SalePrice")
```
## Boxplot discussion
From the boxplot we see that there exist many outliers in the data that would lead to issues in our linear regression.
## Next Steps
We investigate this further by sorting the prices by value and then plotting them.
We follow this by removing values above the boxplot's "maximum" whisker and below its "minimum" whisker (i.e., outside the range Q1 - 1.5*IQR to Q3 + 1.5*IQR) and then visualizing the boxplot and the sorted plot again. As before, we imagine a data modeler who has submitted this series of steps to the private data lake.
```
sorted_prices = df.SalePrice.sort_values()
sorted_prices = sorted_prices.reset_index()
sorted_prices.drop(columns="index", inplace=True)
sorted_prices
sorted_prices.plot(style='.')
```
## Scatter Plot Discussion
From the plot it is apparent that the sorted prices do not follow a straight line, i.e. the price distribution is skewed. We proceed to remove the outliers identified by the boxplot above.
```
Q1 = df['SalePrice'].quantile(0.25)
Q3 = df['SalePrice'].quantile(0.75)
IQR = Q3 - Q1 #IQR is interquartile range.
in_range = (df['SalePrice'] >= Q1 - 1.5 * IQR) & (df['SalePrice'] <= Q3 + 1.5 * IQR)  # keep rows within 1.5*IQR of the quartiles
df = df.loc[in_range]
```
# Post-outlier removal analysis
```
df.boxplot(column="SalePrice")
```
## Boxplot Discussion 2
We see that the outliers (based on the boxplot calculations) have been removed from the dataset and the range of values is acceptable. To "verify" this, we do a scatter plot of the data.
```
sorted_prices = df.SalePrice.sort_values()
sorted_prices = sorted_prices.reset_index()
sorted_prices.drop(columns="index", inplace=True)
sorted_prices
sorted_prices.plot(style='.')
```
# Normalize
Min-max normalization can be easily accomplished on the server.
```
df
for col_to_scale in df.columns:
col_min = min(df[col_to_scale])
col_max = max(df[col_to_scale])
df[col_to_scale] = (df[col_to_scale] - col_min )/ (col_max - col_min)
label = df.SalePrice
df.drop(columns=["SalePrice", "index"], inplace=True)
df
label
```
## Scatter Plot Discussion 2
Although the data is still non-linear, this is acceptable and we can begin modeling.
```
df.to_csv("processed_X.csv", index=False, sep=",")
label.to_csv("processed_y.csv", index=False, sep=",")
```
# Closing Words
Although the resulting graph is better, there are still methods that could transform the values to be more linear (a log transform of the prices, for instance). Nevertheless, those methods are outside the scope of this project, whose aim is to showcase the efficacy of encrypted linear regression.
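As a minimal sketch of one such transformation (not part of the pipeline above; `raw_prices` below simply re-reads the unscaled sale prices for illustration), a log transform tends to straighten the sorted-price curve before min-max scaling is applied:
```
# Hypothetical illustration only: re-read the unscaled prices and log-transform them.
raw_prices = pd.read_csv("data.csv")["SalePrice"]
log_prices = np.log1p(raw_prices)  # log(1 + x), defined for zero-valued entries
log_scaled = (log_prices - log_prices.min()) / (log_prices.max() - log_prices.min())
log_scaled.sort_values().reset_index(drop=True).plot(style='.')
```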
```
import bs4 as bs
import datetime as dt
import pandas as pd
import os
import pandas_datareader.data as web
import pickle
import requests
from dateutil.relativedelta import relativedelta, FR
end_date = pd.Timestamp(pd.to_datetime('today').strftime("%m/%d/%Y"))
start_date = end_date - relativedelta(years=3)
def save_sp500_tickers():
resp = requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
soup = bs.BeautifulSoup(resp.text, 'lxml')
table = soup.find('table', {'class': 'wikitable sortable'})
tickers = []
for row in table.findAll('tr')[1:]:
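        # strip the trailing newline character from the table cell text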
ticker = row.findAll('td')[0].text[:-1]
tickers.append(ticker)
with open("sp500tickers.pickle", "wb") as f:
pickle.dump(tickers, f)
return tickers
# save_sp500_tickers()
def get_data_from_yahoo(start_date, end_date, reload_sp500=False):
if reload_sp500:
tickers = save_sp500_tickers()
else:
with open("sp500tickers.pickle", "rb") as f:
tickers = pickle.load(f)
if not os.path.exists('stock_dfs'):
os.makedirs('stock_dfs')
start = start_date
end = end_date
for ticker in tickers:
# just in case your connection breaks, we'd like to save our progress!
ticker = ticker.replace('.', '-')
if not os.path.exists('stock_dfs/{}.csv'.format(ticker)):
try:
df = web.DataReader(ticker, 'yahoo', start, end)
df.reset_index(inplace=True)
df.set_index("Date", inplace=True)
df.to_csv('stock_dfs/{}.csv'.format(ticker))
print('Create {}'.format(ticker))
except:
print('Drop {}'.format(ticker))
pass
else:
print('Already have {}'.format(ticker))
get_data_from_yahoo(start_date, end_date, reload_sp500=True)
def compile_data():
with open("sp500tickers.pickle", "rb") as f:
tickers = pickle.load(f)
main_df = pd.DataFrame()
    for count, ticker in enumerate(tickers):
        # use the same file naming as get_data_from_yahoo, which replaces '.' with '-'
        ticker = ticker.replace('.', '-')
        try:
df = pd.read_csv('stock_dfs/{}.csv'.format(ticker))
df.set_index('Date', inplace=True)
df = pd.DataFrame(df['Adj Close'])
df.rename(columns={'Adj Close': ticker}, inplace=True)
if main_df.empty:
main_df = df
else:
main_df = main_df.join(df, how='outer')
except:
pass
if count % 10 == 0:
print(count)
print(main_df.head())
main_df.to_csv('sp500_joined_closes.csv')
compile_data()
```
```
import pandas as pd
import geopandas as gpd
import seaborn as sns
import matplotlib.pyplot as plt
import husl
from legendgram import legendgram
import mapclassify
from matplotlib_scalebar.scalebar import ScaleBar
from matplotlib.colors import ListedColormap
from shapely.geometry import Point
from tqdm import tqdm
clusters = pd.read_csv('/Users/martin/Dropbox/Academia/Data/Geo/Amsterdam/clustering/200309_clusters_complete_n30.csv', index_col=0)
clusters
years = pd.read_parquet('/Users/martin/Dropbox/Academia/Data/Geo/Amsterdam/raw/bag_data.pq')
years.columns
years = years[['uID','bouwjaar']]
years['year'] = years['bouwjaar'].apply(lambda x: x[:4] if x else None)
years['year'].value_counts()
bins = [0, 1800, 1850, 1900, 1930, 1945, 1960, 1975, 1985, 1995, 2005, 2020]
years = years.dropna()
years['year'] = pd.cut(years['year'].astype(int), bins)
joined = clusters.merge(years[['uID', 'year']], on='uID', how='left')
joined.head(4)
buildings = gpd.read_file('/Users/martin/Dropbox/Academia/Data/Geo/Amsterdam/clustering/geometry.gpkg', layer='buildings')
buildings = buildings.merge(joined, on='uID', how='left')
buildings
buildings.year.unique()
```
## Plot
```
def north_arrow(f, ax, rotation=0, loc=2, legend_size=(.1,.1), frameon=False, thick=.1, outline=3, edgecolor='k', facecolor='k'):
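    """Add a simple north arrow (a circle with a pointer rotated by `rotation` degrees) as a small inset axes on the figure."""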
from legendgram.util import make_location
from matplotlib.transforms import Affine2D
arrpos = make_location(ax, loc, legend_size=legend_size)
arrax = f.add_axes(arrpos)
circle = plt.Circle((0, 0), radius=1, edgecolor=edgecolor, facecolor='w', linewidth=outline)
arrax.add_patch(circle)
rectangle = plt.Rectangle((-0.05, 0), thick, 1, facecolor=facecolor)
t = Affine2D().rotate_deg(rotation) + arrax.transData
rectangle.set_transform(t)
arrax.add_patch(rectangle)
arrax.axis('scaled')
arrax.set_frame_on(frameon)
arrax.get_yaxis().set_visible(False)
arrax.get_xaxis().set_visible(False)
return arrax
cols = []
colors = [(98, 93, 78), (14, 79, 58), (75, 90, 85), (347, 72, 60), (246, 79, 60), (257, 71, 27)]
for col in colors:
pal = sns.light_palette(col, input="husl", n_colors=3)
for rgb in pal[1:]:
cols.append(rgb)
cols.reverse()
fig, ax = plt.subplots(figsize=(20, 5))
for i, c in enumerate(cols):
ax.add_artist(plt.Circle((i, 0), 0.4, color=c))
ax.set_axis_off()
ax.set_aspect(1)
plt.xlim(-1.25,36.25)
plt.ylim(-2,2)
color = (257, 71, 27) # here for arrow, title, scalebar
# plotting
c = husl.husl_to_hex(*color)
cmap = ListedColormap(cols)
ax = buildings.plot('year', categorical=True, figsize=(30, 30), cmap=cmap, legend=True,
legend_kwds=dict(loc='center right', frameon=False))
ax.set_axis_off()
# add scalebar
scalebar = ScaleBar(dx=1,
color=c,
location=1,
height_fraction=0.001,
#fixed_value=1000,
label='historical period',
label_loc='bottom'
)
ax.add_artist(scalebar)
# add arrow
north_arrow(plt.gcf(), ax, 0, legend_size=(.04,.04), outline=1, edgecolor=c, facecolor=c)
for ext in ['pdf', 'png']:
plt.savefig('figures/AMS_origin.' + ext, bbox_inches='tight')
color = (257, 71, 27) # here for arrow, title, scalebar
# plotting
c = husl.husl_to_hex(*color)
cmap = ListedColormap(cols)
ax = buildings.cx[118000:126000, 480000:490000].plot('year', categorical=True, figsize=(30, 30), cmap=cmap, legend=True,
legend_kwds=dict(loc='center right', frameon=False))
ax.set_axis_off()
# add scalebar
scalebar = ScaleBar(dx=1,
color=c,
location=1,
height_fraction=0.001,
#fixed_value=1000,
label='historical period',
label_loc='bottom'
)
ax.add_artist(scalebar)
# add arrow
north_arrow(plt.gcf(), ax, 0, legend_size=(.04,.04), outline=1, edgecolor=c, facecolor=c)
for ext in ['pdf', 'png']:
plt.savefig('figures/AMS_origin_detail.' + ext, bbox_inches='tight')
import numpy as np
def show_values_on_bars(axs):
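    """Annotate each bar with its height, for a single matplotlib axes or an array of axes."""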
def _show_on_single_plot(ax):
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height() + 0.02
value = '{:.2f}'.format(p.get_height())
ax.text(_x, _y, value, ha="center")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
pal = [husl.husl_to_hex(*color) for color in colors]
# historical core
data = joined.loc[joined['cluster'].isin([8])]['year'].value_counts(sort=False, normalize=True)
sns.set(context="paper", style="ticks", rc={'patch.force_edgecolor': False})
fig, ax = plt.subplots(figsize=(10, 5))
sns.barplot(ax=ax, x=data.index, y=data, order=data.index, palette=cols)
sns.despine(offset=10)
plt.ylabel('frequency')
plt.xlabel('historical period')
plt.ylim(0, 1)
show_values_on_bars(ax)
import scipy.stats as ss
import numpy as np
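# Bias-corrected Cramér's V: strength of association between two categorical variables (0 = none, 1 = perfect).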
def cramers_v(x, y):
confusion_matrix = pd.crosstab(x,y)
chi2 = ss.chi2_contingency(confusion_matrix)[0]
n = confusion_matrix.sum().sum()
phi2 = chi2/n
r,k = confusion_matrix.shape
phi2corr = max(0, phi2-((k-1)*(r-1))/(n-1))
rcorr = r-((r-1)**2)/(n-1)
kcorr = k-((k-1)**2)/(n-1)
return np.sqrt(phi2corr/min((kcorr-1),(rcorr-1)))
cramers_v(joined.cluster, joined.year)
confusion_matrix = pd.crosstab(joined.cluster, joined.year)
chi, p, dof, exp = ss.chi2_contingency(confusion_matrix)
p
chi
dof
ss.chi2_contingency(confusion_matrix)
confusion_matrix = pd.crosstab(joined.cluster, joined.year)
print(confusion_matrix.to_markdown())
```
<a href="https://colab.research.google.com/github/sid-chaubs/data-mining-assignment-1/blob/main/DMT_1_PJ.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!git clone https://github.com/sid-chaubs/data-mining-assignment-1.git
%cd data-mining-assignment-1/
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import regex
from sklearn import tree, model_selection, preprocessing, ensemble
from scipy import stats
pd.set_option('display.precision', 2)
#read in data
data = pd.read_csv('ODI-2021.csv')
print('Original data shape: ', data.shape)
#sanitize programmes
data['Programme'] = data['What programme are you in?'].apply(lambda s: regex.sub(r'(masters|master|msc|m\s|\sat.*|\suva|\(uva\)|\\|\svu)', '', s.lower()))
#sanitize birthdays
data['Birthday'] = pd.to_datetime(data['When is your birthday (date)?'], errors='coerce')
#normalize course participation data
data['DB course taken'] = data['Have you taken a course on databases?'] == 'ja'
data['Information retrieval course taken'] = data['Have you taken a course on information retrieval?'] == '1'
data['ML course taken'] = data['Have you taken a course on machine learning?'] == 'yes'
data['Statistics course taken'] = data['Have you taken a course on statistics?'] == 'mu'
#sanitize/convert other columns
data['Number of neighbors'] = pd.to_numeric(data['Number of neighbors sitting around you?'], errors='coerce')
data['Stood up'] = data['Did you stand up?'] == 'yes'
data['Stress level'] = pd.to_numeric(data['What is your stress level (0-100)?'], errors='coerce')
data['Stress level'] = list(map(lambda d: min(100, d), data['Stress level']))
data['Competition reward'] = pd.to_numeric(data['You can get 100 euros if you win a local DM competition, or we don’t hold any competitions and I give everyone some money (not the same amount!). How much do you think you would deserve then? '], errors='coerce')
data['Random number'] = pd.to_numeric(data['Give a random number'], errors='coerce')
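#regexes to normalize bedtime strings, e.g. "11 pm" -> "11:00 pm" and "23.30" -> "23:30"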
match_single_hours = r'(^[0-9]+)\s*(am|pm|$)$'
match_dots = r'([0-9]+)\.([0-9]+)'
#sanitize bedtime
data['Bedtime'] = pd.to_datetime(list(map(lambda dt: regex.sub(match_single_hours, r'\1:00 \2', dt),
map(lambda dt: regex.sub(match_dots, r'\1:\2', dt), data['Time you went to be Yesterday']))),
errors='coerce')
data['Bedtime'].groupby(data['Bedtime'].dt.hour).count().plot(kind='bar')
#different regexs for matching possible observed programme names
match_superfluous = r'(masters|master|msc|m\s|\sat.*|\suva|\(uva\)|\\|\svu)'
match_cs = r'.*(^cs|\scs|computer science|computational science).*'
match_ai = r'.*(^ai|\sai|artificial intelli).*'
match_bio = r'.*(bioinformatics and s.*|bioinformatics & systems biology).*'
match_qrm = r'.*(qrm|quantative risk management|quantitative risk management).*'
match_ba = r'.*(^ba|\sba|business analytics)'
match_eor = r'.*(^eor|^e&or|^or|econometrics and op.*|econometrics & op.*)'
match_eds = r'.*(^eds|econometrics and data science.*)'
match_ec = r'.*(econometrics)'
match_ft = r'.*(^ft|fintech|finance & technology|finance and technology)'
#zip the matching regexes and corresponding substitutions together
regsubs = zip([match_superfluous, match_cs, match_ai, match_bio, match_qrm, match_ba, match_eor, match_eds, match_ec, match_ft],
['', 'Computer Science', 'Artificial Intelligence', 'Bioinformatics and Systems Biology', 'Quantitative Risk Management',
'Business Analytics', 'Econometrics and Operations Research', 'Econometrics and Data Science', 'Econometrics', 'Finance and Technology'])
def regex_to_sub(re, substr):
'''Helper function for creating an anonymous substitution function with regex.'''
return lambda s: regex.sub(re, substr, s)
#convert to lowercase, substitute course names, remove leading/trailing spaces and capitalize everything left
regfuncs = [lambda s: s.lower()]\
+ [regex_to_sub(re, s) for re, s in regsubs]\
+ [str.strip, lambda s: s[0].upper() + s[1:]]
def chain_sanitize(data, funcs):
'''Apply a list of functions to data in sequence and return the result.'''
res = data
for f in funcs:
res = res.apply(f)
return res
data['Programme'] = chain_sanitize(data['What programme are you in?'], regfuncs)
list(data['Programme'])
data.loc[(data['What is your gender?'] == 'female')]['Stress level'].plot()
gendermeans = data.groupby(['What is your gender?']).mean()
gendermeans['Stress level'].plot(ylabel='Stress level', ylim=(0,100))
data.loc[data['What is your gender?']=='male']['Stress level'].hist()
data.loc[data['What is your gender?']=='female']['Stress level'].hist()
plt.legend(['Male', 'Female'])
```
Overall gender composition of the course:
```
gender_counts = data.groupby(['What is your gender?']).size()
gender_counts.plot.pie(autopct='%.1f')
```
Most popular programmes amongst course-takers:
```
most_popular_progs = data.groupby(['Programme']).size().nlargest(4)
print('Programme counts:\n', most_popular_progs)
print('Most popular programmes account for {0:.2f}% of all represented programmes'
.format(100 * most_popular_progs.sum()/data.shape[0]))
gender_programmes = data.groupby(['What is your gender?', 'Programme']).size()
gp_unstacked = gender_programmes.unstack(fill_value=0)
programmes = ['Artificial Intelligence', 'Computer Science', 'Bioinformatics and Systems Biology', 'Business Analytics']
fig, axes = plt.subplots(nrows=1, ncols=len(programmes), figsize=(20,5))
#create a few pie charts for select most popular programmes to get an idea of gender distribution
for i, programme in enumerate(programmes):
gp_unstacked[programme].plot.pie(autopct='%1.f%%', ax=axes[i])
```
Note that this doesn't necessarily represent the gender distributions in these programmes, as we may be dealing with a biased sample of participants who took this course. (In other words, the gender distribution in these programmes could be even, but a majority of the males from them took Data Mining as a course.)
```
#get the total number of participants by gender in most popular programmes
totals_in_popular = gp_unstacked[programmes].sum(axis=1).values
others_count = gender_counts.values - totals_in_popular
stats.chi2_contingency(pd.DataFrame.from_records([totals_in_popular, others_count]))
```
With a p-value of >0.57 we see that there is no significant effect of gender on studying one of the four most popular programmes for this course vs the others. We can also ask whether there is a relationship between gender and any specific programme in the top four:
```
print('p-value for chi-squared contingency test of most popular programmes for this course: ',
stats.chi2_contingency(gp_unstacked[programmes])[1])
```
Thus, even though there is an imbalance between the numbers of male and female candidates, there is no significant association between gender and choosing one of the four most popular programmes over the others, nor is there a significant gender effect within those programmes.
```
chocolate_answer_coded = pd.get_dummies(data['Chocolate makes you.....'])
tree_data = pd.concat([chocolate_answer_coded, data['Stress level']], axis=1)
print(tree_data.shape)
tree_data.head()
train_cs, test_cs, train_g, test_g = model_selection.train_test_split(tree_data, data['What is your gender?'], test_size=0.33)
print(train_cs.shape, test_cs.shape, train_g.shape, test_g.shape)
train_cs[:3], train_g[:3]
data.iloc[train_cs[:3].index]
dect_chocstress = tree.DecisionTreeClassifier()
dect_chocstress = dect_chocstress.fit(train_cs, train_g)
print('one time test score: ', dect_chocstress.score(test_cs, test_g))
print('cross-validation scores: ', model_selection.cross_val_score(dect_chocstress, test_cs, test_g))
plt.figure(figsize=(25, 20))
tree.plot_tree(dect_chocstress, fontsize=10);
```
# Task 2
```
titanic_train = pd.read_csv('titanic_train.csv')
titanic_test = pd.read_csv('titanic_test.csv')
titanic_train.head()
#plotting the histogram of ages with fitted normal and chi-squared PDFs
_, bins, _ = plt.hist(titanic_train['Age'].dropna()) #store x coords of bins for later PDF plotting
plt.grid()
t_age_mu, t_age_sigma = stats.norm.fit(titanic_train['Age'].dropna())
chi2_params = stats.chi2.fit(titanic_train['Age'].dropna())
#maxh/maxn is a scaling factor that makes it possible to compare the two graphs
maxn = max(stats.norm.pdf(bins, t_age_mu, t_age_sigma))
maxc = max(stats.chi2.pdf(bins, *chi2_params))
maxh = max(titanic_train['Age'].value_counts(bins=10))
plt.plot(bins, (maxh/maxn)* stats.norm.pdf(bins, t_age_mu, t_age_sigma))
plt.plot(bins, (maxh/maxc)* stats.chi2.pdf(bins, *chi2_params))
plt.legend(['Normal distribution fit', 'X^2 distribution fit'])
print('Age normal mu: {0}, sigma: {1}'.format(t_age_mu, t_age_sigma))
print('Age X^2 df: {0} location: {1}, scale: {2}'.format(*chi2_params))
fig, axes = plt.subplots(nrows=1, ncols=2)
titanic_train.groupby('Sex').size().plot.pie(ax=axes[0], autopct='%.2f')
titanic_train.groupby('Pclass').size().plot.pie(ax=axes[1], autopct='%.2f')
sns.pairplot(titanic_train[['Survived', 'Sex', 'Age', 'Pclass', 'Parch']])
sex_codes = pd.get_dummies(titanic_train['Sex'])
pclass_codes = pd.get_dummies(titanic_train['Pclass'])
pclass_codes.columns = ['class1', 'class2', 'class3']
titanic_tree_data_X = pd.concat([titanic_train[['Age', 'SibSp', 'Parch']], sex_codes, pclass_codes], axis=1)
print('Rows with missing age account for {0:.2f}% of the data'
.format(100*titanic_train['Age'].isna().sum()/titanic_train.shape[0]))
#generate filler to not discard the missing age rows
substitute_ages = stats.chi2.rvs(*chi2_params, titanic_train['Age'].isna().sum()).round()
titanic_tree_data_X.loc[titanic_tree_data_X['Age'].isna(), 'Age'] = substitute_ages
ttd_X_train, ttd_X_test, ttd_Y_train, ttd_Y_test = model_selection.train_test_split(titanic_tree_data_X, titanic_train['Survived'])
print(ttd_X_train.head())
print(ttd_X_test.head())
print(ttd_Y_train.head())
print(ttd_Y_test.head())
titanic_forest = ensemble.RandomForestClassifier()
np.mean(model_selection.cross_val_score(titanic_forest, titanic_tree_data_X, titanic_train['Survived'], cv=10))
depths = [None] + list(range(2, 20))
estimator_counts = np.arange(50, 500, 25)
scores = []
#test out different hyperparameters
for n_estim in estimator_counts:
for d in depths:
titanic_forest = ensemble.RandomForestClassifier(n_estim, max_depth=d)
score = model_selection.cross_val_score(titanic_forest,
titanic_tree_data_X,
titanic_train['Survived'])
scores.append([n_estim, d, score, np.mean(score)])
hyperparam_results = pd.DataFrame.from_records(scores, columns=['N. of estimators',
'Max tree depth',
'CV scores',
'Mean score'])
f, ax = plt.subplots(figsize=(10, 5))
hyperparam_results['Mean score'].plot(ax=ax)
plt.grid()
max_score = hyperparam_results['Mean score'].max()
max_id = hyperparam_results['Mean score'].idxmax()
hyperparam_results.loc[hyperparam_results['Mean score'] == max_score]
best_estimator_count, best_depth = hyperparam_results.iloc[max_id][['N. of estimators', 'Max tree depth']]
titanic_forest = ensemble.RandomForestClassifier(best_estimator_count, max_depth=best_depth)
titanic_forest.fit(titanic_tree_data_X, titanic_train['Survived'])
scores = model_selection.cross_validate(titanic_forest, titanic_tree_data_X, titanic_train['Survived'], scoring=['precision_weighted', 'recall_weighted', 'f1_weighted'])
print(scores)
print('f1:', scores['test_f1_weighted'].mean())
print('precision:', scores['test_precision_weighted'].mean())
print('recall:', scores['test_recall_weighted'].mean())
sex_codes_validation = pd.get_dummies(titanic_test['Sex'])
pclass_codes_validation = pd.get_dummies(titanic_test['Pclass'])
pclass_codes_validation.columns = ['class1', 'class2', 'class3']  # match the column names used for training
titanic_validation_data = pd.concat([titanic_test[['Age', 'SibSp', 'Parch']], sex_codes_validation, pclass_codes_validation], axis=1)
chi2_params_validation = stats.chi2.fit(titanic_validation_data['Age'].dropna())
substitute_ages_validation = stats.chi2.rvs(*chi2_params_validation, titanic_validation_data['Age'].isna().sum())
titanic_validation_data.loc[titanic_validation_data['Age'].isna(), 'Age'] = substitute_ages_validation
titanic_forest.predict(titanic_validation_data)
```