Birth weights. Let's look at the distribution of birth weights again.
import first

live, firsts, others = first.MakeFrames()
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Based on KDE, it looks like the distribution is skewed to the left.
birth_weights = live.totalwgt_lb.dropna()
pdf = thinkstats2.EstimatedPdf(birth_weights)
thinkplot.Pdf(pdf, label='birth weight')
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PDF')
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The mean is less than the median, which is consistent with left skew.
Mean(birth_weights), Median(birth_weights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
And both ways of computing skew are negative, which is consistent with left skew.
Skewness(birth_weights), PearsonMedianSkewness(birth_weights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
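The `Skewness` and `PearsonMedianSkewness` helpers used above are defined earlier in the ThinkStats2 chapter notebook; for reference, the underlying formulas can be written directly in NumPy. A self-contained sketch on toy data (not the notebook's data):

```python
import numpy as np

def skewness(xs):
    # sample skewness: standardized third moment, g1 = m3 / m2**1.5
    xs = np.asarray(xs, dtype=float)
    dev = xs - xs.mean()
    return np.mean(dev**3) / np.mean(dev**2) ** 1.5

def pearson_median_skewness(xs):
    # Pearson's median skewness: gp = 3 * (mean - median) / std
    xs = np.asarray(xs, dtype=float)
    return 3 * (xs.mean() - np.median(xs)) / xs.std()

xs = [1, 2, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6]  # small left-skewed toy sample
print(skewness(xs), pearson_median_skewness(xs))  # both come out negative
```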
Adult weights. Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right.
adult_weights = df.wtkg2.dropna()  # df is the BRFSS DataFrame loaded earlier in the notebook
pdf = thinkstats2.EstimatedPdf(adult_weights)
thinkplot.Pdf(pdf, label='Adult weight')
thinkplot.Config(xlabel='Adult weight (kg)', ylabel='PDF')
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The mean is greater than the median, which is consistent with skew to the right.
Mean(adult_weights), Median(adult_weights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
And both ways of computing skewness are positive.
Skewness(adult_weights), PearsonMedianSkewness(adult_weights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Exercises. The distribution of income is famously skewed to the right. In this exercise, we'll measure how strong that skew is. The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded `hinc06.xls`, which is an Excel spreadsheet with information about household income, and converted it to `hinc06.csv`, a CSV file you will find in the repository for this book. You will also find `hinc2.py`, which reads this file and transforms the data. The dataset is in the form of a series of income ranges and the number of respondents who fell in each range. The lowest range includes respondents who reported annual household income "Under \$5000." The highest range includes respondents who made "\$250,000 or more." To estimate the mean and other statistics from these data, we have to make some assumptions about the lower and upper bounds, and about how the values are distributed within each range. `hinc2.py` provides `InterpolateSample`, which shows one way to model these data. It takes a `DataFrame` with a column, `income`, that contains the upper bound of each range, and `freq`, which contains the number of respondents in each range. It also takes `log_upper`, which is an assumed upper bound on the highest range, expressed in `log10` dollars. The default value, `log_upper=6.0`, represents the assumption that the largest income among the respondents is $10^6$, or one million dollars. `InterpolateSample` generates a pseudo-sample; that is, a sample of household incomes that yields the same number of respondents in each range as the actual data. It assumes that incomes in each range are equally spaced on a `log10` scale.
def InterpolateSample(df, log_upper=6.0):
    """Makes a sample of log10 household income.

    Assumes that log10 income is uniform in each range.

    df: DataFrame with columns income and freq
    log_upper: log10 of the assumed upper bound for the highest range

    returns: NumPy array of log10 household income
    """
    # compute the log10 of the upper bound for each range
    df['log_upper'] = np.log10(df.income)

    # get the lower bounds by shifting the upper bound and filling in
    # the first element
    df['log_lower'] = df.log_upper.shift(1)
    df.loc[0, 'log_lower'] = 3.0

    # plug in a value for the unknown upper bound of the highest range
    df.loc[41, 'log_upper'] = log_upper

    # use the freq column to generate the right number of values in
    # each range (np.linspace needs an integer count)
    arrays = []
    for _, row in df.iterrows():
        vals = np.linspace(row.log_lower, row.log_upper, int(row.freq))
        arrays.append(vals)

    # collect the arrays into a single sample
    log_sample = np.concatenate(arrays)
    return log_sample

import hinc
income_df = hinc.ReadData()

log_sample = InterpolateSample(income_df, log_upper=6.0)

log_cdf = thinkstats2.Cdf(log_sample)
thinkplot.Cdf(log_cdf)
thinkplot.Config(xlabel='Household income (log $)', ylabel='CDF')

sample = np.power(10, log_sample)

cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Household income ($)', ylabel='CDF')
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound?
# Solution

Mean(sample), Median(sample)

# Solution

Skewness(sample), PearsonMedianSkewness(sample)

# Solution

# About 66% of the population makes less than the mean
cdf.Prob(Mean(sample))
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
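To check how the answer depends on the assumed upper bound, one option (a sketch that reuses `InterpolateSample`, `income_df`, `thinkstats2`, `np`, and the chapter's `Mean` helper from the cells above) is to rebuild the pseudo-sample for a few values of `log_upper` and recompute the fraction of households below the mean:

```python
# Sketch: sensitivity of the "fraction below the mean" to the assumed upper bound.
for log_upper in [6.0, 6.5, 7.0]:
    log_samp = InterpolateSample(income_df.copy(), log_upper=log_upper)
    samp = np.power(10, log_samp)
    c = thinkstats2.Cdf(samp)
    print(log_upper, Mean(samp), c.Prob(Mean(samp)))
```

Raising the assumed upper bound stretches the top income range, which pulls the mean and both skewness measures up while leaving the median essentially unchanged, so the fraction of households below the mean grows.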
Cross-validation. When training a model, we split the data into train data and validation data for evaluation. In that case, the validation data does not contribute to training, so the model sees less data and tends to overfit the train data. To address this, we can repeat the train/validation split several times and train the model on the resulting datasets; this is called cross-validation. Cross-validation has the advantage that all of the data can be used for both training and evaluation, but the drawback that training takes longer. There are many variants of cross-validation; in this notebook we use Stratified k-fold cross-validation. With stratified k-fold cross-validation, when the label distribution is imbalanced, the train and validation data are split while taking the label counts into account. The model used is klue/bert-base.
import random
from tqdm.notebook import tqdm, tnrange
import os

import numpy as np
import pandas as pd

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import AdamW
from transformers import get_linear_schedule_with_warmup

from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

import torch
from torch import nn
from torch.utils.data import Dataset, TensorDataset, DataLoader, RandomSampler

if torch.cuda.is_available():
    print("사용가능한 GPU수 : ", torch.cuda.device_count())
    device = torch.device("cuda")
else:
    print("CPU 사용")
    device = torch.device("cpu")
사용가능한 GPU수 : 1
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
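As a quick illustration of what stratification buys you, the toy sketch below (synthetic data, not the competition dataset) splits an imbalanced label set with `StratifiedKFold` and shows that every fold keeps roughly the same label proportions:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy data: 90 samples of class 0 and 10 of class 1 (imbalanced labels).
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 90 + [1] * 10)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    # Each validation fold keeps about 10% positives, mirroring the full dataset.
    print(fold, np.bincount(y[val_idx]))
```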
Fixing the random seed for reproducibility. Source: https://dacon.io/codeshare/2363?dtype=vote&s_id=0
RANDOM_SEED = 42 def seed_everything(seed: int = 42): random.seed(seed) np.random.seed(seed) os.environ["PYTHONHASHSEED"] = str(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) # type: ignore torch.backends.cudnn.deterministic = True # type: ignore torch.backends.cudnn.benchmark = True # type: ignore seed_everything(RANDOM_SEED) model_checkpoint = "klue/bert-base" batch_size = 32 dataset = pd.read_csv("data/train_data.csv") test = pd.read_csv("data/test_data.csv") tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) def bert_tokenize(dataset,sent_key,label_key,tokenizer): if label_key is None : labels = [np.int64(0) for i in dataset[sent_key]] else : labels = [np.int64(i) for i in dataset[label_key]] sentences = tokenizer(dataset[sent_key].tolist(),truncation=True,padding=True) input_ids = sentences.input_ids token_type_ids = sentences.token_type_ids attention_mask = sentences.attention_mask return [input_ids, token_type_ids, attention_mask, labels]
_____no_output_____
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
We import sklearn's StratifiedKFold and create a variable to store the predicted data. In `StratifiedKFold()`, `n_splits=5` means that five train/validation splits will be created.
NUM_TEST_DATA = len(test)

skf = StratifiedKFold(n_splits=5)

final_test_pred = np.zeros([NUM_TEST_DATA, 7])
_____no_output_____
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
We define the hyperparameters.
lr = 2e-5
adam_epsilon = 1e-8
epochs = 3
num_warmup_steps = 0
num_labels = 7
_____no_output_____
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
We define `train()`, `evaluate()`, and `predict()`.
def train(model,train_dataloader): train_loss_set = [] learning_rate = [] batch_loss = 0 for step, batch in enumerate(tqdm(train_dataloader)): model.train() batch = tuple(t.to(device) for t in batch) b_input_ids, b_token_type_ids, b_input_mask, b_labels = batch outputs = model(b_input_ids, token_type_ids=b_token_type_ids, attention_mask=b_input_mask, labels=b_labels) loss = outputs[0] loss.backward() torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) optimizer.step() scheduler.step() optimizer.zero_grad() batch_loss += loss.item() avg_train_loss = batch_loss / len(train_dataloader) for param_group in optimizer.param_groups: print("\n\tCurrent Learning rate: ",param_group['lr']) learning_rate.append(param_group['lr']) train_loss_set.append(avg_train_loss) print(F'\n\tAverage Training loss: {avg_train_loss}') def evaluate(model, validation_dataloader): # validation model.eval() eval_accuracy,nb_eval_steps = 0, 0 for batch in tqdm(validation_dataloader): batch = tuple(t.to(device) for t in batch) b_input_ids, b_token_type_ids, b_input_mask, b_labels = batch with torch.no_grad(): logits = model(b_input_ids, token_type_ids=b_token_type_ids, attention_mask=b_input_mask) logits = logits[0].to('cpu').numpy() label_ids = b_labels.to('cpu').numpy() pred_flat = np.argmax(logits, axis=1).flatten() labels_flat = label_ids.flatten() tmp_eval_accuracy = accuracy_score(labels_flat,pred_flat) eval_accuracy += tmp_eval_accuracy nb_eval_steps += 1 print(F'\n\tValidation Accuracy: {eval_accuracy/nb_eval_steps}') def predict(model, test_dataloader): pred = [] model.eval() for batch in tqdm(test_dataloader): batch = tuple(t.to(device) for t in batch) b_input_ids, b_token_type_ids, b_input_mask, b_labels = batch with torch.no_grad(): logits = model(b_input_ids, token_type_ids=b_token_type_ids, attention_mask=b_input_mask) logits = logits[0].to('cpu').numpy() for p in logits: pred.append(p) return pred
_____no_output_____
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
The `split()` method of `StratifiedKFold()` returns, for the data passed to it, the indices that divide it into train data and validation data. We can use those indices on the DataFrame to separate the train and validation data. After training and evaluating on each split, we predict on the test data, and the predictions are accumulated into the final prediction array (`final_test_pred`). The total training time is the time for one training run multiplied by the number passed as `n_splits` (here, 5).
for train_idx, validation_idx in skf.split(dataset["title"],dataset["topic_idx"]): dataset_train = pd.DataFrame() dataset_val = pd.DataFrame() dataset_train["title"] = dataset["title"][train_idx] dataset_train["topic_idx"] = dataset["topic_idx"][train_idx] dataset_val["title"] = dataset["title"][validation_idx] dataset_val["topic_idx"] = dataset["topic_idx"][validation_idx] train_inputs = bert_tokenize(dataset_train,"title","topic_idx",tokenizer) validation_inputs = bert_tokenize(dataset_val,"title","topic_idx",tokenizer) test_inputs = bert_tokenize(test,"title",None,tokenizer) for i in range(len(train_inputs)): train_inputs[i] = torch.tensor(train_inputs[i]) for i in range(len(validation_inputs)): validation_inputs[i] = torch.tensor(validation_inputs[i]) for i in range(len(test_inputs)): test_inputs[i] = torch.tensor(test_inputs[i]) train_data = TensorDataset(*train_inputs) train_sampler = RandomSampler(train_data) train_dataloader = DataLoader(train_data,sampler=train_sampler,batch_size=batch_size) validation_data = TensorDataset(*validation_inputs) validation_sampler = RandomSampler(validation_data) validation_dataloader = DataLoader(validation_data,sampler=validation_sampler,batch_size=batch_size) test_data = TensorDataset(*test_inputs) test_dataloader = DataLoader(test_data,batch_size=batch_size) model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint,num_labels=num_labels) model.zero_grad() model.to(device) optimizer = AdamW(model.parameters(), lr=lr,eps=adam_epsilon,correct_bias=False) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=len(train_dataloader)*epochs) for _ in tnrange(1,epochs+1,desc='Epoch'): print("<" + "="*22 + F" Epoch {_} "+ "="*22 + ">") # train train(model, train_dataloader) # validation evaluate(model, validation_dataloader) # predict pred = predict(model, test_dataloader) final_test_pred += pred
Some weights of the model checkpoint at klue/bert-base were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.predictions.decoder.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.predictions.bias'] - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForSequenceClassification were not initialized from the model checkpoint at klue/bert-base and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
During the five rounds of cross-validation, the predictions made by the models trained on the different train/validation splits have been added into `final_test_pred`. Taking the `argmax` of these summed predictions gives the final predictions.
final_test_pred[:10]

len(final_test_pred)

total_pred = np.argmax(final_test_pred, axis=1)

total_pred[:10]

submission = pd.read_csv('data/sample_submission.csv')
submission['topic_idx'] = total_pred
submission.to_csv("results/klue-bert-base-kfold5.csv", index=False)
_____no_output_____
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
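Summing the per-fold outputs before the `argmax` is effectively soft voting: folds that are confident contribute more to the final decision than folds that are unsure. A tiny illustration with made-up numbers:

```python
import numpy as np

# Made-up logits for one test sample from three folds (7 classes).
fold_preds = np.array([
    [0.1, 2.0, 0.3, 0.0, 0.1, 0.2, 0.1],  # fold 1 favours class 1
    [0.2, 1.8, 0.4, 0.1, 0.0, 0.3, 0.2],  # fold 2 favours class 1
    [0.1, 0.2, 2.5, 0.3, 0.1, 0.0, 0.1],  # fold 3 favours class 2
])

summed = fold_preds.sum(axis=0)
print(np.argmax(summed))  # class 1 wins on the summed scores
```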
Exercise 3. **Please Note**: We updated the requirements.txt. Please install the new requirements before editing this exercise. Import packages
import os

from vll.utils.download import download_mnist

import numpy as np
import matplotlib.pyplot as plt

import skimage
import skimage.io

import torch
import torch.nn.functional as F
from torchvision import transforms

from models.mnist.simple_cnn import Net
_____no_output_____
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
Task 1 (2 points). In this task, you will learn some basic tensor operations using the PyTorch library. Reference for torch: https://pytorch.org/docs/stable/torch.html
# Create a numpy array that looks like this: [0, 1, 2, ..., 19] arr = # Convert the numpy array to a torch tensor tensor = print(tensor) # Create a tensor that contains random numbers. # It should have the same size like the numpy array. # Multiply it with the previous tensor. rand_tensor = tensor = print(tensor) # Create a tensor that contains only 1s. # It should have the same size like the numpy array. # Substract it from the previous tensor. tensor = print(tensor) # Get the 5th element using a index. element = print(element) # Create a tensor that contains only 0s. # It should have the same size like the numpy array. # Multiply it with the previous tensor without any assignment (in place). # Load the image from the last exercise as RGB image. image = # Convert the image to a tensor image = # Print its shape print(image.shape) # Flatten the image image = print(len(image)) # Add another dimension resulting in a 1x78643 tensor print(image.shape) # Revert the last action print(image.shape) # Reshape the tensor, so that it has the original 2D dimensions image = print(image.shape) # Calculate the sum, mean and max of the tensor print(torch.sum(image)) print(torch.mean(image)) print(torch.max(image))
_____no_output_____
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
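The cell above is intentionally left with blanks for students to fill in. One possible way to complete it is sketched below; it is not the official solution, and the image path is a placeholder for the image from the last exercise.

```python
import numpy as np
import torch
import skimage.io

arr = np.arange(20)                          # [0, 1, 2, ..., 19]
tensor = torch.from_numpy(arr)               # numpy array -> torch tensor

rand_tensor = torch.rand(arr.shape[0])       # random numbers, same size
tensor = rand_tensor * tensor                # multiply with the previous tensor

tensor = tensor - torch.ones(arr.shape[0])   # subtract a tensor of ones

element = tensor[4]                          # 5th element via an index

tensor.mul_(torch.zeros(arr.shape[0]))       # in-place multiply with zeros, no assignment

image = skimage.io.imread('image.png')       # placeholder path for the RGB image
h, w, c = image.shape
image = torch.from_numpy(image).float()
print(image.shape)

image = image.flatten()                      # flatten the image
print(len(image))

image = image.unsqueeze(0)                   # add another dimension -> (1, h*w*c)
print(image.shape)

image = image.squeeze(0)                     # revert the last action
print(image.shape)

image = image.reshape(h, w * c)              # one reading of "the original 2D dimensions"
print(image.shape)

print(torch.sum(image))
print(torch.mean(image))
print(torch.max(image))
```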
Task 2 (2 points). Use Autograd to perform operations on a tensor and output the gradients.
# Create a random 2x2 tensor which requires gradients x = print(x) # Create another tensor by adding 2.0 y = print(y) # Create a third tensor z = y^2 z = print(z) # Compute out as the mean of values in z out = print(out) # Perform back propagation on out # Print the gradients dout/dx # Create a copy of y whithout gradients y2 = print(y2.requires_grad) # Perform the mean operation on z # with gradients globally disabled
_____no_output_____
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
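Again, the blanks are for students; one possible solution sketch for Task 2:

```python
import torch

x = torch.rand(2, 2, requires_grad=True)  # random 2x2 tensor which requires gradients
print(x)

y = x + 2.0                               # another tensor, created by adding 2.0
print(y)

z = y ** 2                                # z = y^2
print(z)

out = z.mean()                            # mean of the values in z
print(out)

out.backward()                            # back propagation on out
print(x.grad)                             # gradients dout/dx = 2 * (x + 2) / 4

y2 = y.detach()                           # copy of y without gradients
print(y2.requires_grad)                   # False

with torch.no_grad():                     # mean on z with gradients globally disabled
    z_mean = z.mean()
```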
Task 3 (3 points). Implement a Dataset class for MNIST.
# We first download the MNIST dataset download_mnist() class MNIST: """ Dataset class for MNIST """ def __init__(self, root, transform=None): """ root -- path to either "training" or "testing" transform -- transform (from torchvision.transforms) to be applied to the data """ # save transforms self.transform = transform # TODO: create a list of all subdirectories (named like the classes) # within the dataset root # TODO: create a list of paths to all images # with the ground truth label def __len__(self): """ Returns the lenght of the dataset (number of images) """ # TODO: return the length (number of images) of the dataset def __getitem__(self, index): """ Loads and returns one image as floating point numpy array index -- image index in [0, self.__len__() - 1] """ # TODO: load the ith image as an numpy array (dtype=float32) # TODO: apply transforms to the image (if there are any) # TODO: return a tuple (transformed image, ground truth)
_____no_output_____
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
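One way the `MNIST` dataset class could be filled in is sketched below. It assumes `download_mnist()` lays the data out as `root/<digit>/<image files>`, which is an assumption you should check against the actual download.

```python
import os
import glob
import numpy as np
import skimage.io

class MNIST:
    """Dataset sketch: expects root/<digit>/<image files> (layout assumed)."""

    def __init__(self, root, transform=None):
        self.transform = transform
        # list of subdirectories named like the classes (assumed to be '0'..'9')
        classes = sorted(d for d in os.listdir(root)
                         if os.path.isdir(os.path.join(root, d)))
        # list of (image path, ground-truth label) pairs
        self.samples = []
        for cls in classes:
            for path in glob.glob(os.path.join(root, cls, '*')):
                self.samples.append((path, int(cls)))

    def __len__(self):
        # number of images in the dataset
        return len(self.samples)

    def __getitem__(self, index):
        path, label = self.samples[index]
        # load the image as a float32 numpy array
        image = skimage.io.imread(path).astype(np.float32)
        # apply transforms to the image (if there are any)
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```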
Task 4 (3 points). You can now load a pretrained neural network model we provide. Your last task is to run the model on the MNIST test dataset, plot some example images with the predicted labels, and compute the prediction accuracy.
def validate(model, data_loader): # TODO: Create a 10x10 grid of subplots model.eval() correct = 0 # count for correct predictions with torch.no_grad(): for i, item in enumerate(data_loader): # TODO: unpack item into image and ground truth # and run network on them # TODO: get class with highest probability # TODO: check if prediction is correct # and add it to correct count # plot the first 100 images if i < 100: # TODO: compute position of ith image in the grid # TODO: convert image tensor to numpy array # and normalize to [0, 1] # TODO: make wrongly predicted images red # TODO: disable axis and show image # TODO: show the predicted class next to each image elif i == 100: plt.show() # TODO: compute and print the prediction accuracy in percent # create a DataLoader using the implemented MNIST dataset class data_loader = torch.utils.data.DataLoader( MNIST('data/mnist/testing', transform=transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])), batch_size=1, shuffle=True) # create the neural network model = Net() # load the statedict from 'models/mnist/simple_cnn.pt' model.load_state_dict(torch.load('models/mnist/simple_cnn.pt')) # validate the model validate(model, data_loader)
_____no_output_____
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
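The plotting part of `validate` is largely a matter of taste; the core accuracy bookkeeping could look roughly like the sketch below, which assumes the DataLoader yields `(image, label)` batches and skips the 10x10 image grid.

```python
import torch

def accuracy(model, data_loader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for image, label in data_loader:
            output = model(image)              # class scores for the batch
            pred = output.argmax(dim=1)        # class with the highest score
            correct += (pred == label).sum().item()
            total += label.size(0)
    return 100.0 * correct / total

# e.g. print('accuracy: %.2f%%' % accuracy(model, data_loader))
```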
Selection: booleans, numbers, and expressions ![](../Photo/33.png)
- Note: the equality comparison operator is two equals signs; a single equals sign means assignment.
- In Python, the integer 0 can represent False and any other number True.
- The use of `is` in conditional statements will be covered later. String comparison uses ASCII values.

Markdown - https://github.com/younghz/Markdown

EP:
- Read a number and determine whether it is odd or even (a sketch appears after the code cell below).
#除了bool(0)是false以外,其他数全是true #bool(0) 执行时也是false #if bool(1-1): # print(yes) #else: # print(no) #结果是打印 no b1=bool(4) print(b1) i=3 if i==5: print('i=5') else: print("i!=5") i=eval(input("输入i" )) if i==5: print('i=5') else: print("i!=5")
输入i3 i!=5
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
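The EP above asks for an odd/even check, which the cell does not actually implement; a minimal sketch:

```python
# Sketch for the EP: read a number and report whether it is odd or even.
n = int(input("输入一个数字"))  # prompt: enter a number
if n % 2 == 0:
    print("偶数")  # even
else:
    print("奇数")  # odd
```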
Generating random numbers
- The function random.randint(a, b) produces a random integer between a and b, inclusive.

Generate a random number and have the user keep guessing: if the guess is greater than the random number, tell them it is too big; if smaller, too small; keep taking input until the guess is correct.
import random a=random.randint(1,100) while 1: b=eval(input("比较数")) if a>b: print("太小了") if a<b: print("太大了") if a==b: print("yes") break
比较数50 太小了 比较数60 太小了 比较数70 太小了 比较数80 太小了 比较数90 太小了 比较数95 yes
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
Other random methods
- random.random returns a random float in the half-open interval [0.0, 1.0).
- random.randrange(a, b) is likewise closed at a and open at b.

EP:
- Generate two random integers number1 and number2, show them to the user, have the user enter their sum, and check whether it is correct.
- Advanced: write a random roll-call program that picks a student at random (a sketch appears after the code cell below).
import random a=random.randint(1,10) b=random.randint(1,10) c=a+b number=0 while number<5: d=eval(input("和为?")) if c>d: print("太小了") if c<d: print("太大了") if c==d: print("yes") break number +=1 import random a=random.randint(1,10) b=random.randint(1,10) c=a+b for i in range(5): d=eval(input("和为?")) if c>d: print("太小了") if c<d: print("太大了") if c==d: print("yes") break #输入一个数字,把它拆分成因子 #range(a,b) 从a按正序输出到b, a,b 可以是数字可以是变量。 a=eval(input("输入一个数")) while number<a:
_____no_output_____
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
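For the advanced EP (a random roll-call program), one possible sketch using `random.randrange` with a hypothetical name list:

```python
import random

# Hypothetical class roster; replace with the real name list.
students = ["张三", "李四", "王五", "赵六"]

index = random.randrange(0, len(students))  # closed at 0, open at len(students)
print("请", students[index], "回答问题")      # call on the randomly selected student
```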
if statements
- A one-way if statement executes the statements inside the if only when the condition is true.
- Python has several kinds of selection statements: one-way if, two-way if-else, nested if, and multi-way if-elif-else.
- Note: when a statement contains sub-statements, those sub-statements must be indented at least one level.
- Never mix the Tab key and spaces for indentation; use only tabs or only spaces.
- When something should be printed regardless of whether the if condition is true, that statement should be aligned with the if.

EP:
- Read a number from the user and determine whether it is odd or even.
- Advanced: see Section 4.5, the "guess the birthday" case study.

Two-way if-else statement
- If the condition is true, the if branch runs; otherwise the else branch runs.
a=eval(input("数字")) if a>2: if a%2==0: print("大于二的偶数") else: print("大于二的奇数") else: print("不大于二") a=input("有钱吗?") a1="有钱" b1="帅" c1="没有" if a==a1: #字符串可以直接比较,不需要定义变量 b=input("帅不帅") if b==b1: print("有没有老婆") c=input("") if c==c1: print("见一面") else: print("滚") else: print("回家等着吧") else: print("不大于二")
有钱吗?有钱 帅不帅帅 有没有老婆 有 滚
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
EP:
- Generate two random integers number1 and number2, show them to the user, have the user enter a number, and check whether it is correct: if so, print "you're correct", otherwise report that it is wrong.

Nested if and multi-way if-elif-else ![](../Photo/35.png)
#出现一次elif,就要出现一次if #有点相似于else不能单独出现 a=input("有钱吗?") if a=="有": b=input("帅不帅 ") elif b=="不帅": c=input("有老婆吗 ") elif c=="没有": print("结婚") else: print("滚")
有钱吗?有 帅不帅 不帅
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
EP:
- Prompt the user for a year, then display the zodiac animal for that year. ![](../Photo/36.png)
- A program that computes the body mass index (BMI).
- BMI = weight in kilograms divided by the square of the height in meters. ![](../Photo/37.png)
#多行同时输入 按住ALT 等鼠标变加号,下拉被选中的行,同时编写 year=eval(input("请输入年份")) if year%12==0: print("猴") elif year%12==1: print("鸡") elif year%12==2: print("狗") elif year%12==3: print("猪") elif year%12==4: print("鼠") elif year%12==5: print("牛") elif year%12==6: print("虎") elif year%12==7: print("兔") elif year%12==8: print("龙") elif year%12==9: print("蛇") elif year%12==10: print("马") elif year%12==11: print("羊") h=eval(input("请输入身高")) w=eval(input("请输入体重")) BMI=w/h/h if BMI<18.5: print("超轻") elif 18.5<=BMI<25: print("标准") elif 25<=BMI<30: print("超重") elif 30<=30: print("痴肥")
请输入身高1.69 请输入体重47 超轻
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
Logical operators ![](../Photo/38.png) ![](../Photo/39.png) ![](../Photo/40.png)

EP:
- Determining leap years: a year is a leap year if it is divisible by 4 but not by 100, or if it is divisible by 400.
- Prompt the user for a year and report whether it is a leap year.
- Prompt the user for a number and determine whether it is a narcissistic (Armstrong) number.
year=eval(input("请输入年份")) if (year%100!=0) and (year%4==0): print("是闰年") if year%400==0: print("是闰年") else: print("是平年") shu=eval(input("请输入一个数")) bai=shu//100 shi=shu//10 shi1=shi%10 ge=shu%10 a=bai/bai b=shi1/shi1 #已经知道是三位数了,不需要判断 #c=ge/ge #d=a+b+c #e=bai**d+shi1**d+ge**d e=bai**3+shi1**3+ge**3 if e==shu: print("是水仙花数") else: print("不是") shu=eval(input("请输入一个数")) bai=shu//100 shi=shu//10%10 ge=shu%10 print(bai,shi,ge) if bai**3+shi**3+ge**3==shu: print(shu) else: print("不是") for i in range(100,999): bai=i//100 shi=i//10 shi1=shi%10 ge=i%10 e=bai**3+shi1**3+ge**3 if e==i: print(i)
153 370 371 407
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
Case study: the lottery ![](../Photo/41.png) Homework - 1 ![](../Photo/42.png)
import math a=eval(input("a")) b=eval(input("b")) c=eval(input("c")) pan=b**2-4*a*c if pan>0: print("两个根") elif pan<0: print("没有根") else: print("有一个根")
a1 b2 c3 没有根
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 2![](../Photo/43.png)
import random a=random.randint(1,100) b=random.randint(1,100) c=a+b d=eval(input("和为?")) if c==d: print("真") else: print("假")
_____no_output_____
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 3![](../Photo/44.png)
x=eval(input("今天是星期几?")) jth=eval(input("你想算几天以后")) c=(x+jth)%7 if c==0: print("今天是星期日") else: print("今天是星期",c)
今天是星期几?5 你想算几天以后7 今天是星期 5
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 4![](../Photo/45.png)
i=eval(input("请输入一个整数")) c=eval(input("请输入一个整数")) k=eval(input("请输入一个整数")) list1=[i,c,k] list1.sort() print(list1)
请输入一个整数5 请输入一个整数1 请输入一个整数9 [1, 5, 9]
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 5![](../Photo/46.png)
w1=eval(input("请输入包装")) m1=eval(input("请输入重量")) w2=eval(input("请输入包装")) m2=eval(input("请输入重量")) b1=w1*m1 b2=w2*m2 if b1>b2: print("b2更合适") else : print("b1更合适")
请输入包装50 请输入重量24.59 请输入包装25 请输入重量11.99 b2更合适
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 6![](../Photo/47.png)
mo1 = eval(input("请输入月"))
year1 = eval(input("请输入年"))

# leap year: divisible by 4 but not by 100, or divisible by 400
if (year1 % 4 == 0 and year1 % 100 != 0) or year1 % 400 == 0:
    if mo1 == 2:
        print(year1, "年", mo1, "月份", "有29天")
else:
    if mo1 == 1:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 2:
        print(year1, "年", mo1, "月份", "有28天")
    elif mo1 == 3:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 4:
        print(year1, "年", mo1, "月份", "有30天")
    elif mo1 == 5:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 6:
        print(year1, "年", mo1, "月份", "有30天")
    elif mo1 == 7:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 8:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 9:
        print(year1, "年", mo1, "月份", "有30天")
    elif mo1 == 10:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 11:
        print(year1, "年", mo1, "月份", "有30天")
    elif mo1 == 12:
        print(year1, "年", mo1, "月份", "有31天")
请输入月2 请输入年2001 2001 年 2 月份 有28天
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 7![](../Photo/48.png)
import random yingbi=random.randint(1,2) cai=eval(input("你猜猜")) if yingbi==cai: print("正确") else: print("错误")
你猜猜2 错误
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 8![](../Photo/49.png)
import random dian_nao=random.randint(0,2) ren=eval(input("你要出什么?"+"石头=0 剪刀=2 布=1 ")) print(dian_nao) if ren==dian_nao: print("平局") else: if ren==0 and dian_nao==2: print("赢了") elif ren==2 and dian_nao==0: print("输了") elif ren>dian_nao: print("赢了") else: print("输了")
你要出什么?石头=0 剪刀=2 布=1 0 1 输了
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 9![](../Photo/50.png)
import math

year = eval(input("请输入年"))
m = eval(input("请输入月"))
q = eval(input("请输入日"))

if m == 1:
    m = 13
    year = year - 1
if m == 2:
    m = 14
    year = year - 1

h = (q + int(26 * (m + 1) / 10) + int(year % 100) + int(year % 100 / 4) + int(year / 100 / 4) + int(5 * year / 100)) % 7

if h == 0:
    print("今天是星期六")
if h == 1:
    print("今天是星期日")
if h == 2:
    print("今天是星期一")
if h == 3:
    print("今天是星期二")
if h == 4:
    print("今天是星期三")
if h == 5:
    print("今天是星期四")
if h == 6:
    print("今天是星期五")

a = 3.7
print(int(a))
3
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 10![](../Photo/51.png)
import random hua=random.randint(1,4) daxiao=random.randint(1,13) if hua==1: hua="红桃" elif hua==2: hua="梅花" elif hua==3: hua="方块" elif hua==4: hua="黑桃" if daxiao==1: daxiao="Ace" elif daxiao==11: daxiao="Jack" elif daxiao==12: daxiao="Queen" elif daxiao==13: daxiao="King" print("这张牌是 ",hua,daxiao)
这张牌是 方块 King
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 11![](../Photo/52.png)
shu11=eval(input("请输入一个数")) bai=shu11//100 shi=shu11//10%10 ge=shu11%10 if bai==ge: print(shu11,"是回文数") else: print("不是回文数")
请输入一个数123 不是回文数
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
- 12![](../Photo/53.png)
bian1=eval(input("请输入第一条边的边长")) bian2=eval(input("请输入第二条边的边长")) bian3=eval(input("请输入第三条边的边长")) if bian1+bian2>bian3 and abs(bian1-bian2)<bian3: print("合理") else: print("不合理") bian1=eval(input("请输入第一条边的边长")) bian2=eval(input("请输入第二条边的边长")) bian3=eval(input("请输入第三条边的边长")) qing3=bian1+bian2 qing2=bian1+bian3 qing1=bian3+bian2 q3=bian1-bian2 q2=bian1-bian3 q1=bian3-bian2 if qing1>bian1 and qing2>bian2 and qing3>bian3 : print("合理") else: print("不合理")
请输入第一条边的边长1 请输入第二条边的边长1 请输入第三条边的边长9 不合理
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
Part I. ETL Pipeline for Pre-Processing the Files PLEASE RUN THE FOLLOWING CODE FOR PRE-PROCESSING THE FILES Import Python packages
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Creating list of filepaths to process original event csv data files
# checking your current working directory
print(os.getcwd())

# Get your current folder and subfolder event data
filepath = os.getcwd() + '/event_data'

# Create a for loop to create a list of files and collect each filepath
for root, dirs, files in os.walk(filepath):
    # join the file path and roots with the subdirectories using glob
    file_path_list = glob.glob(os.path.join(root, '*'))
    #print(file_path_list)
/home/workspace
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Processing the files to create the data file csv that will be used for Apache Cassandra tables
# initiating an empty list of rows that will be generated from each file full_data_rows_list = [] # for every filepath in the file path list for f in file_path_list: # reading csv file with open(f, 'r', encoding = 'utf8', newline='') as csvfile: # creating a csv reader object csvreader = csv.reader(csvfile) next(csvreader) # extracting each data row one by one and append it for line in csvreader: #print(line) full_data_rows_list.append(line) # uncomment the code below if you would like to get total number of rows #print(len(full_data_rows_list)) # uncomment the code below if you would like to check to see what the list of event data rows will look like #print(full_data_rows_list) # creating a smaller event data csv file called event_datafile_full csv that will be used to insert data into the \ # Apache Cassandra tables csv.register_dialect('myDialect', quoting=csv.QUOTE_ALL, skipinitialspace=True) with open('event_datafile_new.csv', 'w', encoding = 'utf8', newline='') as f: writer = csv.writer(f, dialect='myDialect') writer.writerow(['artist','firstName','gender','itemInSession','lastName','length',\ 'level','location','sessionId','song','userId']) for row in full_data_rows_list: if (row[0] == ''): continue writer.writerow((row[0], row[2], row[3], row[4], row[5], row[6], row[7], row[8], row[12], row[13], row[16])) # check the number of rows in your csv file with open('event_datafile_new.csv', 'r', encoding = 'utf8') as f: print(sum(1 for line in f))
6821
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Part II. Complete the Apache Cassandra coding portion of your project. Now you are ready to work with the CSV file titled event_datafile_new.csv, located within the Workspace directory. The event_datafile_new.csv contains the following columns:
- artist
- firstName of user
- gender of user
- item number in session
- last name of user
- length of the song
- level (paid or free song)
- location of the user
- sessionId
- song title
- userId

The image below is a screenshot of what the denormalized data should appear like in the **event_datafile_new.csv** after the code above is run. Begin writing your Apache Cassandra code in the cells below. Creating a Cluster
# This should make a connection to a Cassandra instance on your local machine
# (127.0.0.1)
from cassandra.cluster import Cluster
cluster = Cluster()

# To establish connection and begin executing queries, need a session
session = cluster.connect()
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Create Keyspace
# TO-DO: Create a Keyspace
try:
    session.execute("""
    CREATE KEYSPACE IF NOT EXISTS udacity
    WITH REPLICATION =
    { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
    )
except Exception as e:
    print(e)
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Set Keyspace
# TO-DO: Set KEYSPACE to the keyspace specified above
try:
    session.set_keyspace('udacity')
except Exception as e:
    print(e)
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Now we need to create tables to run the following queries. Remember, with Apache Cassandra you model the database tables on the queries you want to run. Create queries to ask the following three questions of the data:
1. Give me the artist, song title and song's length in the music app history that was heard during sessionId = 338, and itemInSession = 4
2. Give me only the following: name of artist, song (sorted by itemInSession) and user (first and last name) for userid = 10, sessionid = 182
3. Give me every user name (first and last) in my music app history who listened to the song 'All Hands Against His Own'
# Creating table for query "sessionId = 338, and itemInSession = 4" create_table_query = """ CREATE TABLE IF NOT EXISTS session_library ( session_id INT, item INT, artist TEXT, song_title TEXT, song_length FLOAT, PRIMARY KEY (session_id, item) ); """ try: session.execute(create_table_query) except Exception as e: print(e) # CSV file file = 'event_datafile_new.csv' # Insert data into table with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) # skip header for line in csvreader: insert_query = """ INSERT INTO session_library (session_id, item, artist, song_title, song_length) VALUES (%s, %s, %s, %s, %s); """ session.execute(insert_query, (int(line[8]), int(line[3]), line[0], line[9], float(line[5])))
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Do a SELECT to verify that the data have been inserted into each table
query = """
    SELECT artist, song_title, song_length
    FROM session_library
    WHERE session_id = %s AND item = %s
"""

try:
    rows = session.execute(query, (338, 4))
except Exception as e:
    print(e)

for row in rows:
    print("Artist:", row.artist, ", Song:", row.song_title, ", Song length:", row.song_length)
Artist: Faithless , Song: Music Matters (Mark Knight Dub) , Song length: 495.30731201171875
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
COPY AND REPEAT THE ABOVE THREE CELLS FOR EACH OF THE THREE QUESTIONS
# Creating table for query "userid = 10, sessionid = 182" sorted by item create_table_query = """ CREATE TABLE IF NOT EXISTS user_library ( user_id TEXT, session_id INT, item INT, artist TEXT, song_title TEXT, first_name TEXT, last_name TEXT, PRIMARY KEY ((user_id, session_id), item) ); """ try: session.execute(create_table_query) except Exception as e: print(e) # Insert data into table with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) # skip header for line in csvreader: insert_query = """ INSERT INTO user_library (user_id, session_id, item, artist, song_title, first_name, last_name) VALUES (%s, %s, %s, %s, %s, %s, %s); """ session.execute(insert_query, (line[10], int(line[8]), int(line[3]), line[0], line[9], line[1], line[4])) # Select the data query = """ SELECT artist, song_title, first_name, last_name FROM user_library WHERE user_id =% s AND session_id = %s """ try: rows = session.execute(query, ("10", 182)) except Exception as e: print(e) for row in rows: print ("Artist:", row.artist, ", Song:", row.song_title, ", First name:", row.first_name, ", Last name:", row.last_name) # Creating table for query "song_title = All Hands Against His Own" create_table_query = """ CREATE TABLE IF NOT EXISTS song_library ( song_title TEXT, user_id TEXT, first_name TEXT, last_name TEXT, PRIMARY KEY (song_title, user_id) ); """ try: session.execute(create_table_query) except Exception as e: print(e) # Insert data into table with open(file, encoding = 'utf8') as f: csvreader = csv.reader(f) next(csvreader) # skip header for line in csvreader: insert_query = """ INSERT INTO song_library (song_title, user_id, first_name, last_name) VALUES (%s, %s, %s, %s); """ session.execute(insert_query, (line[9], line[10], line[1], line[4])) # Select the data query = """ SELECT first_name, last_name FROM song_library WHERE song_title = %s """ try: rows = session.execute(query, ("All Hands Against His Own",)) except Exception as e: print(e) for row in rows: print ("First Name:", row.first_name, ", Last Name:", row.last_name,)
First Name: Jacqueline , Last Name: Lynch First Name: Tegan , Last Name: Levine First Name: Sara , Last Name: Johnson
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Drop the tables before closing out the sessions
## TO-DO: Drop the tables before closing out the sessions
try:
    session.execute("DROP TABLE IF EXISTS session_library")
    session.execute("DROP TABLE IF EXISTS user_library")
    session.execute("DROP TABLE IF EXISTS song_library")
except Exception as e:
    print(e)
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Close the session and cluster connection
session.shutdown()
cluster.shutdown()
_____no_output_____
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Module 2.3: Working with LSTMs in Keras (A Review). We turn to implementing a type of recurrent neural network known as the LSTM in the Keras functional API. In this module we will pay attention to:
1. Using the Keras functional API for defining models.
2. Mounting your Google Drive to your Colab environment for file interface.
3. Generating synthetic data from an LSTM and a sequence seed.

Those students who are comfortable with all these matters might consider skipping ahead. Note that we will not spend time tuning hyper-parameters: the purpose is to show how different techniques can be implemented in Keras, not to solve particular data science problems as optimally as possible. Obviously, most techniques include hyper-parameters that need to be tuned for optimal performance. First we import the required libraries.
import sys
import numpy

from google.colab import drive

from keras.models import Sequential
from keras import Model
from keras.optimizers import Adadelta
from keras.layers import Dense, Dropout, LSTM, Input
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
Using TensorFlow backend.
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
We will have a little fun and try to teach a neural network to write like Lewis Carroll, the author of Alice in Wonderland. Note, though, that the same technique can be used to model any sequential system and generate simulations from seeds for such a system. Here the sequence is the characters written by Carroll in Alice in Wonderland, but it could be, for example, an industrial system that evolves in time. In that case, when we generate simulations of the system based on current and recent conditions, we simulate the expected evolution of the system, something of great value! We will use the [Project Gutenberg text file of Alice in Wonderland](https://www.gutenberg.org/files/11/11.txt). But we need to get the file into our Colab environment, and this takes some work. First, you need to place the file in your Google Drive. We will assume that you will place it in a folder called "Mastering Keras Datasets", and that you rename it "Alice.txt". If you don't, you will need to change the file path used in the code. Once you have done that, you will need to mount your Google Drive in Colab. Run the following code and complete the required authorizations. Note that you will need to mount your drive every time you use code from this tutorial.
# Note: You will need to mount your drive every time you
# run code in this tutorial.
drive.mount('/content/drive')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code Enter your authorization code: ·········· Mounted at /content/drive
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now we can load the file using code and prepare the data. We want to work with sequences of 100 characters as input data, and our target will be the next (101st) character. To keep things simple, we will ignore upper/lower case distinctions and cast all alphabetical characters to lower case. To allow our model to work with these characters, we will encode them as integers. We will then normalize them to real numbers between 0 and 1 and add a dimension (we are working with a system with a single feature). Finally, we will one-hot encode the target character (see the previous module for a discussion of one-hot encoding). This is not the only way to handle the data, but it is a simple one. We will also return the unnormalized and non-reshaped X data, the number of characters found, and an integer-to-character dictionary, all for use later.
def load_alice ( rawTextFile="/content/drive/My Drive/Mastering Keras Datasets/Alice.txt" ): # load ascii text and covert to lowercase raw_text = open(rawTextFile, encoding='utf-8').read() raw_text = raw_text.lower() # create mapping of unique chars to integers chars = sorted(list(set(raw_text))) char_to_int = dict((c, i) for i, c in enumerate(chars)) int_to_char = dict((i, c) for i, c in enumerate(chars)) # summarize the loaded data n_chars = len(raw_text) n_vocab = len(chars) print ("Total Characters: ", n_chars) print ("Total Vocab: ", n_vocab) # prepare the dataset of input to output pairs encoded as integers seq_length = 100 dataX = [] dataY = [] for i in range(0, n_chars - seq_length, 1): seq_in = raw_text[i:i + seq_length] seq_out = raw_text[i + seq_length] dataX.append([char_to_int[char] for char in seq_in]) dataY.append(char_to_int[seq_out]) n_patterns = len(dataX) print ("Total Patterns: ", n_patterns) # reshape X to be [samples, time steps, features] X = numpy.reshape(dataX, (n_patterns, seq_length, 1)) # normalize X = X / float(n_vocab) # one hot encode the output variable Y = np_utils.to_categorical(dataY) return X,Y,dataX,n_vocab,int_to_char
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now let's load the data. X and Y are the input and target label datasets we will use in training. X_ is the un-reshaped X data for use later.
X,Y,X_,n_vocab,int_to_char = load_alice()
Total Characters: 163810 Total Vocab: 58 Total Patterns: 163710
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
You can play around below to look at the shape of the resulting X and Y arrays, as well as their contents. But they are no longer understandable character strings.
# Play around here to look at data characteristics
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now we define our LSTM using the Keras functional API. We are going to make use of LSTM layers, and add dropout layers for regularization. We will pass the data to the model-defining function so that we can read the input and output dimensions from it, rather than hard-coding them. For comparison, a second version of the function is included showing how to use the sequential approach.
def get_model (X,Y): # define the LSTM model inputs=Input(shape=(X.shape[1],X.shape[2]),name="Input") lstm1=LSTM(256, input_shape=(100,1),return_sequences=True)(inputs) drop1=Dropout(0.2)(lstm1) lstm2=LSTM(256)(drop1) drop2=Dropout(0.2)(lstm2) outputs=Dense(Y.shape[1], activation='softmax')(drop2) model=Model(inputs=inputs,outputs=outputs) return model def get_model_sequential (X,Y): # define the LSTM model model = Sequential() model.add(LSTM(256, input_shape=(X.shape[1],X.shape[2]),return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(256)) model.add(Dropout(0.2)) model.add(Dense(Y.shape[1], activation='softmax')) return model
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
We get our model.
model=get_model(X,Y)
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now we will define an optimizer to compile the model with. If you are unfamiliar with the different types of optimizers available in Keras, I suggest you read the Keras documentation [here](https://keras.io/optimizers/) and play around with training the model using different alternatives.
opt=Adadelta()
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
And we compile our model with the optimizer ready for training. We use categorical crossentropy as our loss function as this is a good default choice for working with a multi-class categorical target variable (i.e. the next character labels).
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now we will make a function to fit the model. We will not do this very professionally (it is just a fun project), and so will not use any validation data. Rather, we will just run the training for a number of epoches - by default 20, though you can change this.We will, though, use a ModelCheckpoint callback to save the best performing weights and load these into the model and the conclusion of the training. Note that training performance should normally improve with more epoches, so this is unlikely to improve performance. What we really want is to be able to load the best weights without having to redo the training process (see below)If you want to, you are encouraged to alter the code in this tutorial to work with a training and validation set, and adjust the fit function below to incorporate an EarlyStopping callback based on performance on the validation data.We have two one LSTM layer, we are dealing with sequences of length 100. So if we 'unroll' it, we have a network of 200 LSTM layers. And inside these layers are infact multiple internal layers setting up the LSTM architecture! So this is actually a pretty big network, and training will take some time (about 200 hours on the free Colab environment for 200 epochs). This is probably too much to conveniently run yourself.Here we have an example of how we could train it on Colab. Colab will eventually time out. The best thing to do is to save our weights file to our google drive, so we can load it at leisure later and resume training. This is what we will do. Remember that if you didn't use the default name for your folder in your google drive you should change the path string in the code.In real life, you will also often want to save the state of the optimizer (so that it keeps its current learning rate, etc). You can do this by accessing and saving model.optimizer.get_state(). It is left as an exercise to implement this.*It is not expected that you train the network using this function - see below to load trained weights from your google drive.*
def fit_model (model,X,Y,epochs=100): # define the checkpoint callback filepath="/content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5" checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min') callbacks_list = [checkpoint] # fit the model model.fit(X, Y, epochs=epochs, batch_size=128, callbacks=callbacks_list) # load the best weights model.load_weights(filename) # return the final model return model
_____no_output_____
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
We would then fit (train) the model by calling the above function. *It is not expected that you train the network using this function - see below to load trained weights from your Google Drive.*
model=fit_model(model,X,Y,100)
Epoch 1/100 163710/163710 [==============================] - 3246s 20ms/step - loss: 3.0840 - acc: 0.1663 Epoch 00001: loss improved from inf to 3.08398, saving model to /content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Here we will load saved weights. You can use the "alice_best_weights.hdf5" file that comes with the course - just place it in the same folder as the "Alice.txt" file in your Google Drive. This file has been trained for 200 epochs and gets a loss of around 1.16. If you train the network yourself, the best weights will be saved as "alice_best_weights.hdf5" in the same location as above, so you can use the same code in both cases. In all cases, remember to change the filepath if you are not using the default folder name. If you are resuming this tutorial here in a new session, you should re-mount your Google Drive using the earlier code, re-load the data, and then run this code block to load the weights into a new model. If you want to train the model further, you will need to compile it with an optimizer.
model = get_model(X, Y)

filepath = "/content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5"
model.load_weights(filepath)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:148: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3733: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:203: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Now we can see if our network has mastered the art of writing like Lewis Carroll! Let's write a function to let us see, and then call it.
def write_like_Lewis_Carroll(model,X_,n_vocab,int_to_char): # pick a random seed... start = numpy.random.randint(0, len(X_)-1) # ... in order to decide which X datum to use to start pattern = X_[start] print ("Seed:") print ("\"", ''.join([int_to_char[value] for value in pattern]), "\"") # generate characters for i in range(1000): # We transform the integer mapping of the characters to # real numbers suitable for input into our model. x = numpy.reshape(pattern, (1, len(pattern), 1)) x = x/float(n_vocab) # We use the model to estimate the probability distribution for # the next character prediction = model.predict(x, verbose=0) # We choose as the next character whichever the model thinks is most likely index = numpy.argmax(prediction) result = int_to_char[index] seq_in = [int_to_char[value] for value in pattern] sys.stdout.write(result) # We add the integer to our pattern... pattern.append(index) # ... and drop the earliest integer from our pattern. pattern = pattern[1:len(pattern)] print ("\nDone.") write_like_Lewis_Carroll(model,X_,n_vocab,int_to_char)
Seed: " for it to speak with. alice waited till the eyes appeared, and then nodded. 'it's no use speaking t " o see the mock turtle shat ' 'i should hiv tereat ' thought alice, 'i must be giederen seams to be a bonk,' she said to herself, 'it would be of very curious to onow what there was a sery dortut, and the ooral of that iss thin the cook and a large rister sha thought the was now one of the court. but the dould not heve a little botrle of the thate with a things of tee the door, she could not hear the conlers on the coor with pisted so see it was she same sotnd and mook up and was that it was ouer the whnle shoiek, and the thought the was now a bot of ceain, and was domencd it voice and bookdrs shat the was nuire silent for a minute, and she was nooiing at the court. 'i should hit tere things,' said the caterpillar. 'well, perhaps you may bean the same siings tuertion,' the duchess said to the gryphon. 'what i cen the thing,' said the caterpillar. 'well, perhaps you may bean the same siings tuertion,' the mock turtle seplied, 'that i man the mice,' said the caterpillar. 'well, per Done.
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
Model Training - Basic Model. In this notebook, we will go through building a basic PyTorch model for training, and training it to get results on our dataset. Imports: In this project, we will be using PyTorch for deep learning. NLP pre-processing, however, will be done using Keras's modules, because I prefer the implementation provided in that library. Instead of installing Keras, the relevant modules are imported as scripts from GitHub.
import pandas as pd
import numpy as np

import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F

from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score

import math
from numpy import save, load

import keras_sequence_preprocessing as seq_preprocessing
import keras_text_preprocessing as text_preprocessing

import matplotlib.pyplot as plt
import time

from PyTorchTools import EarlyStopping

quora_train_text = pd.read_csv('data/augmented_quora_text.txt')
quora_train_text = quora_train_text.dropna()
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Word Embeddings. We have two different types of word embeddings we will try in this application: GloVe and FastText. To use a specific embedding, run that cell and not the other, as both are loaded in with the same format.
embed_size = 300

# GLOVE Embeddings (run this block or the FastText block below, not both)
embeddings_dict = {}
with open('../Embeddings/glove.6B/glove.6B.%dd.txt' % (embed_size), 'rb') as f:
    for line in f:
        values = line.split()
        word = values[0].decode()  # decode bytes so keys match the str tokens used later
        vector = np.asarray(values[1:], "float32")
        embeddings_dict[word] = vector

# FASTTEXT Embeddings
embeddings_dict = {}
with open('../Embeddings/crawl-%dd-2M.vec' % (embed_size), 'rb') as f:
    for line in f:
        splits = line.split()
        word = splits[0]
        vec = np.asarray(splits[1:], dtype='float32')
        embeddings_dict[word.decode()] = vec
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
We build a word index for the vocabulary. To quickly do this, we simply iterate over the dataset and assign an integer value to each word.
word_index = {}
token_num = 0

for row in quora_train_text[['cleaned_text', 'target']].iterrows():
    text, label = row[1]
    tokens = [token for token in text.split(' ')]

    for token in tokens:
        if token not in word_index:
            word_index[token] = token_num
            token_num = token_num + 1

MAX_WORDS = 200000
MAX_LEN = 70
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
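As a toy illustration of the mapping this loop produces (a minimal sketch, independent of the Quora data): each distinct token gets the next free integer. One caveat worth flagging: in the loop above, index 0 is handed to an ordinary word, while the padding added later also uses 0; starting the counter at 1, as in this sketch, keeps index 0 reserved for padding.

toy_index, next_id = {}, 1  # start at 1 so that 0 stays free for padding
for token in "what is the best way to learn deep learning".split(' '):
    if token not in toy_index:
        toy_index[token] = next_id
        next_id += 1
print(toy_index)  # {'what': 1, 'is': 2, 'the': 3, 'best': 4, 'way': 5, 'to': 6, 'learn': 7, 'deep': 8, 'learning': 9}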
Next, we encode the individual sentences into sequences of integers using the word index, then pad them to a fixed length with post-sequence padding (and post-truncation for longer sentences).
def encode_sentences(sentence, word_index=word_index, max_words=MAX_WORDS): output = []; for token in sentence.split(' '): if (token in word_index) and (word_index[token] < max_words): output.append(word_index[token]); return output; encoded_sentences = [encode_sentences(sent) for sent in quora_train_text['cleaned_text']] encoded_lengths = [len(x) for x in encoded_sentences] padded_sequences = seq_preprocessing.pad_sequences(encoded_sentences, maxlen=MAX_LEN, padding='post', truncating='post');
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
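To make the post-padding / post-truncating behaviour concrete, here is a minimal pure-Python sketch of what pad_sequences(..., padding='post', truncating='post') does for a single sequence (illustration only; the notebook uses the Keras implementation above):

def pad_post(seq, maxlen, value=0):
    seq = seq[:maxlen]                          # post-truncate: keep the first maxlen tokens
    return seq + [value] * (maxlen - len(seq))  # post-pad: append zeros up to maxlen

print(pad_post([4, 7, 9], maxlen=5))            # [4, 7, 9, 0, 0]
print(pad_post([4, 7, 9, 2, 8, 1], maxlen=5))   # [4, 7, 9, 2, 8]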
To do training / testing, we will divide the dataset into proper training and validation splits: 85% of the dataset for training, and the remaining 15% for validation.
val_split = int(0.85 * len(quora_train_text)); train_ds = padded_sequences[:val_split]; val_ds = padded_sequences[val_split:]; train_y = quora_train_text.iloc[:val_split]['target'].values; val_y = quora_train_text.iloc[val_split:]['target'].values; train_lens = encoded_lengths[:val_split]; val_lens = encoded_lengths[val_split:]; len(train_ds), len(val_ds)
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
We build an embeddings matrix. Each row in the matrix is the GloVe / FastText vector for the word at that index; words that are missing from the pretrained embeddings are assigned a random vector.
vocab_size = min(MAX_WORDS, len(word_index))+1; embeddings_matrix = np.zeros((vocab_size, embed_size)); for word, posit in word_index.items(): if posit >= vocab_size: break; vec = embeddings_dict.get(word); if vec is None: vec = np.random.sample(embed_size); embeddings_dict[word] = vec; embeddings_matrix[posit] = vec; embeddings_tensor = torch.Tensor(embeddings_matrix)
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
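A quick sanity check that is often worth running at this point (a small sketch using the objects defined above) is the pretrained-vocabulary coverage. Run it before the matrix-building loop, since that loop backfills misses with random vectors; note the GloVe file above is read in binary mode, so its keys may be bytes rather than strings, which is why both forms are checked.

in_vocab = [w for w, idx in word_index.items() if idx < MAX_WORDS]
covered = sum(1 for w in in_vocab if w in embeddings_dict or w.encode() in embeddings_dict)
print('pretrained coverage: %d / %d words (%.1f%%)' % (covered, len(in_vocab), 100.0 * covered / len(in_vocab)))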
Build a Dataset and DataLoader to iterate over the data in fixed-size batches during the training process:
class QuoraDataset(Dataset): def __init__(self, encoded_sentences, labels, lengths): self.encoded_sentences = encoded_sentences; self.labels = labels; self.lengths = lengths; def __len__(self): return len(self.encoded_sentences); def __getitem__(self, index): x = self.encoded_sentences[index, :]; x = torch.LongTensor(x); y = self.labels[index]; y = torch.Tensor([y]); length = self.lengths[index]; length = torch.Tensor([length]); return x, y, length; train_dataset = QuoraDataset(train_ds, train_y, train_lens); val_dataset = QuoraDataset(val_ds, val_y, val_lens); batch_size = 512; train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True); val_loader = DataLoader(val_dataset, batch_size=batch_size, shuffle=True);
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Creating a Model
The Torch Model will have the following architecture:
1. Embeddings Layer
2. 1st LSTM Layer
3. 1st Dense Fully Connected Layer
4. ReLU Activation
5. 2nd LSTM Layer
6. Global Max-Average Pooling Layer
7. 2nd Dense Fully Connected Layer
class Model(nn.Module): def __init__(self, embedding_matrix, hidden_unit = 64): super(Model, self).__init__(); vocab_size = embeddings_tensor.shape[0]; embedding_dim = embeddings_tensor.shape[1]; self.embedding_layer = nn.Embedding(vocab_size, embedding_dim); self.embedding_layer.weight = nn.Parameter(embeddings_tensor); self.embedding_layer.weight.requires_grad = True; self.lstm_1 = nn.LSTM(embedding_dim, hidden_unit, bidirectional=True); self.fc_1 = nn.Linear(hidden_unit*2, hidden_unit*2); self.lstm_2 = nn.LSTM(hidden_unit*2, hidden_unit, bidirectional=True); self.fc_2 = nn.Linear(hidden_unit * 2 * 2, 1); def forward(self, x, lengths=None): out = self.embedding_layer(x); out, _ = self.lstm_1(out); out = self.fc_1(out); out = torch.relu(out); out, _ = self.lstm_2(out); out_avg, out_max = torch.mean(out, 1), torch.max(out, 1)[0]; out = torch.cat((out_avg, out_max), 1); out = self.fc_2(out); return out; device = 'cuda' if torch.cuda.is_available() else 'cpu' device model = Model(embeddings_tensor, 64); model = model.to(device); model
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
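Before wiring up the loss and optimizer, a cheap way to confirm the layer shapes line up is a forward pass on a random batch (a small sketch reusing the objects defined above); each padded question should come back as a single raw logit:

dummy_batch = torch.randint(low=0, high=vocab_size, size=(8, MAX_LEN)).to(device)  # 8 fake padded questions
with torch.no_grad():
    dummy_logits = model(dummy_batch)
print(dummy_logits.shape)  # torch.Size([8, 1]) - one logit per question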
We use a binary cross-entropy loss function (applied to the raw logits via BCEWithLogitsLoss), and an Adam optimizer with a learning rate of 0.003.
criterion = nn.BCEWithLogitsLoss(); optimizer = torch.optim.Adam(lr=0.003, params = model.parameters());
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
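Note that the model's final layer returns raw logits rather than probabilities; nn.BCEWithLogitsLoss applies the sigmoid internally, which is numerically more stable than stacking a sigmoid with nn.BCELoss. A tiny self-contained sketch of the equivalence:

logits = torch.tensor([[2.0], [-1.0]])
targets = torch.tensor([[1.0], [0.0]])
loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)
print(loss_with_logits.item(), loss_manual.item())  # the two values agree up to floating-point error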
Model Training
Now we write the methods that iterate over the data to train and evaluate our model.
def train(nn_model, nn_optimizer, nn_criterion, data_loader, val_loader = None, num_epochs = 5, print_ratio = 0.1, verbose=True): print_every_step = int(print_ratio * len(train_loader)); if verbose: print('Training with model: '); print(nn_model); for epoch in range(num_epochs): epoch_time = time.time(); f1_scores_train = [] # Enable Training for the model nn_model.train() running_loss = 0; all_ys = torch.tensor(data=[]).to(device); all_preds = torch.tensor(data=[]).to(device); for ite, (x, y, l) in enumerate(data_loader): init_time = time.time(); # Convert our tensors to GPU tensors x = x.cuda() y = y.cuda() # Clear gradients nn_optimizer.zero_grad() # Forward Propagation and compute predictions preds = nn_model.forward(x, l) # Compute loss against actual values loss = nn_criterion(preds, y) # Add predictions and actuals into larger list for scoring all_preds = torch.cat([all_preds, preds]); all_ys = torch.cat([all_ys, y]); # Back Propagation and Updating weights loss.backward() nn_optimizer.step() running_loss = running_loss + loss.item(); if ite % print_every_step == print_every_step-1: # Compute Sigmoid Activation and Prediction Probabilities preds_sigmoid = torch.sigmoid(all_preds).cpu().detach().numpy(); # Compute Predictions over the Sigmoid base line all_preds = (preds_sigmoid > 0.5).astype(int); # Compute Metrics all_ys = all_ys.detach().cpu().numpy(); f_score = f1_score(all_ys, all_preds); precision = precision_score(all_ys, all_preds); recall = recall_score(all_ys, all_preds); accuracy = accuracy_score(all_ys, all_preds); print('\t[%d %5d %.2f sec] loss: %.3f acc: %.3f prec: %.3f rec: %.3f f1: %.3f'%(epoch+1, ite+1, time.time() - init_time, running_loss / 2000, accuracy, precision, recall, f_score)) all_ys = torch.tensor(data=[]).to(device); all_preds = torch.tensor(data=[]).to(device); print('Epoch %d done in %.2f min'%(epoch+1, (time.time() - epoch_time)/60 )); if val_loader is not None: eval(nn_model, nn_criterion, val_loader); running_loss = 0.0; def eval(nn_model, nn_criterion, data_loader): # Disable weight updates with torch.no_grad(): # Enable Model Evaluation nn_model.eval() running_loss = 0; all_ys = torch.tensor(data=[]).to(device); all_preds = torch.tensor(data=[]).to(device); init_time = time.time(); for ite, (x, y, l) in enumerate(data_loader): # Convert tensors to GPU tensors x = x.cuda() y = y.cuda() # Forward propagation to compute predictions preds = nn_model.forward(x, l) # Compute loss on these predictions loss = nn_criterion(preds, y) all_preds = torch.cat([all_preds, preds]); all_ys = torch.cat([all_ys, y]); running_loss = running_loss + loss.item(); # Compute Sigmoid activation on the predictions, and derive predictions over the Sigmoid base line preds_sigmoid = torch.sigmoid(all_preds).cpu().detach().numpy(); all_preds = (preds_sigmoid > 0.5).astype(int); # Compute metrics all_ys = all_ys.detach().cpu().numpy(); f_score = f1_score(all_ys, all_preds); precision = precision_score(all_ys, all_preds); recall = recall_score(all_ys, all_preds); accuracy = accuracy_score(all_ys, all_preds); print('\tEVAL: [%5d %.2f sec] loss: %.3f acc: %.3f prec: %.3f rec: %.3f f1: %.3f'%(ite+1, time.time() - init_time, running_loss / 2000, accuracy, precision, recall, f_score))
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Running Training on the Model
train(model, optimizer, criterion, train_loader) eval(model, criterion, val_loader)
EVAL: [ 764 16.99 sec] loss: 0.046 acc: 0.953 prec: 0.617 rec: 0.480 f1: 0.540
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Recurrent Neural Networks with Keras
Sentiment analysis from movie reviews
This notebook is inspired by the imdb_lstm.py example that ships with Keras. But since I used to run IMDb's engineering department, I couldn't resist!
It's actually a great example of using RNNs. The dataset we're using consists of user-generated movie reviews and a classification of whether the user liked the movie or not, based on its associated rating.
More info on the dataset is here: https://keras.io/datasets/imdb-movie-reviews-sentiment-classification
So we are going to use an RNN to do sentiment analysis on full-text movie reviews!
Think about how amazing this is. We're going to train an artificial neural network how to "read" movie reviews and guess whether the author liked the movie or not from them.
Since understanding written language requires keeping track of all the words in a sentence, we need a recurrent neural network to keep a "memory" of the words that have come before as it "reads" sentences over time.
In particular, we'll use LSTM (Long Short-Term Memory) cells because we don't really want to "forget" words too quickly - words early on in a sentence can affect the meaning of that sentence significantly.
Let's start by importing the stuff we need:
from tensorflow.keras.preprocessing import sequence from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding from tensorflow.keras.layers import LSTM from tensorflow.keras.datasets import imdb
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Now import our training and testing data. We specify that we only care about the 20,000 most popular words in the dataset in order to keep things somewhat manageable. The dataset includes 25,000 training reviews and 25,000 testing reviews.
print('Loading data...') (x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=20000)
Loading data...
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Let's get a feel for what this data looks like. Let's look at the first training feature, which should represent a written movie review:
x_train[0]
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
That doesn't look like a movie review! But this dataset has spared you a lot of trouble - they have already converted words to integer-based indices. The actual letters that make up a word don't really matter as far as our model is concerned; what matters are the words themselves - and our model needs numbers to work with, not letters.
So just keep in mind that each number in the training features represents some specific word. It's a bummer that we can't just read the reviews in English as a gut check to see if sentiment analysis is really working, though.
What do the labels look like?
y_train[0]
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
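If you do want a readable gut check, Keras ships the word index used to encode the reviews, so you can invert it and map the integers back to (approximate) English. A small sketch; note the indices in the loaded data are offset by 3 because load_data reserves 0, 1 and 2 for padding, start-of-sequence and unknown words:

imdb_word_index = imdb.get_word_index()
index_to_word = {index + 3: word for word, index in imdb_word_index.items()}  # undo the index_from=3 offset
index_to_word[0], index_to_word[1], index_to_word[2] = '<pad>', '<start>', '<unk>'
print(' '.join(index_to_word.get(i, '<unk>') for i in x_train[0])[:200])  # first ~200 characters of review 0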
They are just 0 or 1, which indicates whether the reviewer said they liked the movie or not.
So to recap, we have a bunch of movie reviews that have been converted into vectors of words represented by integers, and a binary sentiment classification to learn from.
RNNs can blow up quickly, so again to keep things manageable on our little PC, let's limit the reviews to their first 80 words:
x_train = sequence.pad_sequences(x_train, maxlen=80) x_test = sequence.pad_sequences(x_test, maxlen=80)
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Now let's set up our neural network model! Considering how complicated an LSTM recurrent neural network is under the hood, it's really amazing how easy this is to do with Keras.
We will start with an Embedding layer - this is just a step that converts the input data into dense vectors of fixed size that are better suited for a neural network. You generally see this in conjunction with index-based text data like we have here. The 20,000 indicates the vocabulary size (remember we said we only wanted the top 20,000 words) and 128 is the dimensionality of the embedding vectors it outputs.
Next we just have to set up an LSTM layer for the RNN itself. It's that easy. We specify 128 units to match the output size of the Embedding layer, and dropout terms to avoid overfitting, which RNNs are particularly prone to.
Finally we just need to boil it down to a single neuron with a sigmoid activation function to choose our binary sentiment classification of 0 or 1.
model = Sequential() model.add(Embedding(20000, 128)) model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(1, activation='sigmoid'))
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
As this is a binary classification problem, we'll use the binary_crossentropy loss function. And the Adam optimizer is usually a good choice (feel free to try others.)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
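Before kicking off the long training run below, a cheap sanity check on the wiring (a small sketch using the padded data and model from above) is to push a couple of reviews through the untrained network and confirm the shapes: the Embedding layer turns (batch, 80) word indices into (batch, 80, 128) vectors, the LSTM reduces that to its final (batch, 128) state, and the Dense layer outputs one (batch, 1) probability.

preview = model.predict(x_train[:2])   # two padded reviews of shape (2, 80)
print(preview.shape)                   # (2, 1): one sigmoid probability per review
model.summary()                        # layer-by-layer output shapes and parameter counts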
Now we will actually train our model. RNNs, like CNNs, are very resource heavy. Keeping the batch size relatively small is the key to enabling this to run on your PC at all. In the real world, of course, you'd be taking advantage of GPUs installed across many computers on a cluster to make this scale a lot better.
Warning
This will take a very long time to run, even on a fast PC! Don't execute the next block unless you're prepared to tie up your computer for an hour or more.
model.fit(x_train, y_train, batch_size=32, epochs=15, verbose=2, validation_data=(x_test, y_test))
C:\Users\Frank\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
OK, let's evaluate our model's accuracy:
score, acc = model.evaluate(x_test, y_test, batch_size=32, verbose=2) print('Test score:', score) print('Test accuracy:', acc)
Test score: 0.9316869865119457 Test accuracy: 0.80904
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
First Python Notebook project
- we write a simple piece of code and upload it to git
- task: we will define the function
$$\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$$
import numpy as np def phi(x): out = 1./np.sqrt(2.*np.pi)*np.exp(-x**2/2.) return out import matplotlib.pyplot as plt %matplotlib inline x_cod = np.linspace(-5,5,111) y_cod = phi(x_cod) plt.plot(x_cod,y_cod)
_____no_output_____
MIT
src/prj01.ipynb
hhk54250/20MA573-HHK
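As a quick numerical sanity check (a small sketch reusing the grid above), the density should integrate to approximately 1; the trapezoidal rule over [-5, 5] is accurate enough here because the tails outside that interval are negligible.

approx_area = np.trapz(y_cod, x_cod)  # trapezoidal rule on the 111-point grid
print(approx_area)                    # very close to 1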
Predicting house prices: a regression example
Another common type of machine-learning problem is regression, which consists of predicting a continuous value instead of a discrete label: for instance, predicting the temperature tomorrow, given meteorological data; or predicting the time that a software project will take to complete, given its specifications.
Dataset: The Boston Housing Price dataset
We'll attempt to predict the median price of homes in a given Boston suburb in the mid-1970s, given data points about the suburb at the time, such as the crime rate, the local property tax rate, and so on. It has relatively few data points: only 506, split between 404 training samples and 102 test samples. And each feature in the input data (for example, the crime rate) has a different scale. For instance, some values are proportions, which take values between 0 and 1; others take values between 1 and 12, others between 0 and 100, and so on.
import os, time import tensorflow as tf physical_devices = tf.config.experimental.list_physical_devices('GPU') tf.config.experimental.set_memory_growth(physical_devices[0], True) tf.keras.backend.clear_session() from tensorflow.keras.datasets import boston_housing (train_data, train_targets), (test_data, test_targets) = boston_housing.load_data() # Let’s look at the data: print (train_data.shape, test_data.shape) train_targets
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
The prices are typically between $10,000 and $50,000. If that sounds cheap, remember that this was the mid-1970s, and these prices aren't adjusted for inflation.
Preparing the data
It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to automatically adapt to such heterogeneous data, but it would definitely make learning more difficult. A widespread best practice to deal with such data is to do feature-wise normalization: for each feature in the input, subtract the mean of the feature and divide by the standard deviation, so that the feature is centered around 0 and has a unit standard deviation. This is easily done in NumPy.
mean = train_data.mean(axis=0) train_data -= mean std = train_data.std(axis=0) train_data /= std test_data -= mean test_data /= std
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
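The same standardisation can be expressed with scikit-learn's StandardScaler, which makes it explicit that the statistics are computed on the training set only and then reused on the test set. This is just a sketch for comparison (it is not used elsewhere in the notebook, and assumes scikit-learn is installed):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_data)  # learns per-feature mean and std from the training data
test_scaled = scaler.transform(test_data)        # applies the training mean and std to the test data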
Model Architecture
Because so few samples are available, you'll use a very small network with two hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using a small network is one way to mitigate overfitting.
from tensorflow.keras import models from tensorflow.keras import layers def build_model(): model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(1)) model.compile(optimizer='rmsprop', loss='mse', metrics=['mae']) return model
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
Validating your approach using K-fold validation
To evaluate your network while you keep adjusting its parameters (such as the number of epochs used for training), you could split the data into a training set and a validation set, as you did in the previous examples. But because you have so few data points, the validation set would end up being very small (for instance, about 100 examples). As a consequence, the validation scores might change a lot depending on which data points you chose to use for validation and which you chose for training: the validation scores might have a high variance with regard to the validation split. This would prevent you from reliably evaluating your model.
The best practice in such situations is to use K-fold cross-validation. It consists of splitting the available data into K partitions (typically K = 4 or 5), instantiating K identical models, and training each one on K - 1 partitions while evaluating on the remaining partition. The validation score for the model used is then the average of the K validation scores obtained. In terms of code, this is straightforward.
import numpy as np k = 4 num_val_samples = len(train_data) // k num_epochs = 100 all_scores = [] for i in range(k): print('processing fold #', i) val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples] val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples] partial_train_data = np.concatenate([train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis=0) partial_train_targets = np.concatenate([train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis=0) model = build_model() model.fit(partial_train_data, partial_train_targets, epochs=num_epochs, batch_size=1, verbose=0) val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0) all_scores.append(val_mae) all_scores
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
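The hand-rolled slicing above is essentially what scikit-learn's KFold does with shuffling disabled; a sketch of the same partitioning for comparison (assuming scikit-learn is available):

from sklearn.model_selection import KFold

for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=k).split(train_data)):
    print('processing fold #', fold)
    partial_train_data, val_data = train_data[train_idx], train_data[val_idx]
    partial_train_targets, val_targets = train_targets[train_idx], train_targets[val_idx]
    # ...then build a fresh model, fit it on the partial training split and
    # evaluate on the held-out fold, exactly as in the loop above.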
Let's train the network for a little longer: 500 epochs
num_epochs = 500
all_mae_histories = []
# data from partition #k
for i in range(k):
    print('processing fold #', i)
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
    partial_train_data = np.concatenate([train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis=0)
    partial_train_targets = np.concatenate([train_targets[:i * num_val_samples], train_targets[(i + 1) * num_val_samples:]], axis=0)
    model = build_model()
    history = model.fit(partial_train_data, partial_train_targets, validation_data=(val_data, val_targets), epochs=num_epochs, batch_size=1, verbose=0)
    mae_history = history.history['val_mean_absolute_error']
    all_mae_histories.append(mae_history)

average_mae_history = [np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)]

import matplotlib.pyplot as plt
plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()

def smooth_curve(points, factor=0.9):
    smoothed_points = []
    for point in points:
        if smoothed_points:
            previous = smoothed_points[-1]
            smoothed_points.append(previous * factor + point * (1 - factor))
        else:
            smoothed_points.append(point)
    return smoothed_points

smooth_mae_history = smooth_curve(average_mae_history[10:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()

model = build_model()
model.fit(train_data, train_targets, epochs=80, batch_size=16, verbose=0)
test_mse_score, test_mae_score = model.evaluate(test_data, test_targets)
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
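For reference, the smooth_curve helper above is an exponential moving average of the per-epoch validation MAE: with smoothing factor $f = 0.9$, each plotted point is $$s_t = f \, s_{t-1} + (1 - f)\, p_t,$$ which damps the epoch-to-epoch noise so the point where the validation MAE stops improving is easier to read off the curve before choosing the final number of training epochs.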
from google.colab import drive
drive.mount('/content/gdrive')

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
import PIL
from PIL import Image
import time
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='3'
import pathlib

data_dir='/content/gdrive/MyDrive/data'
data_dir=pathlib.Path(data_dir)
batch_size=32
image_height=180
image_width=180
d_image_count = len(list(data_dir.glob('*/*.png')))
print(d_image_count)

train_ds=tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,  # why 0.2?
    subset="training",
    seed=123,  # what does seed mean?
    image_size=(image_height, image_width),
    batch_size=batch_size
)
val_ds=tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(image_height, image_width),
    batch_size=batch_size
)
class_names = train_ds.class_names

plt.figure(figsize=(10,10))
for images, labels in train_ds.take(1):  # what does the take function do?
    for i in range(9):
        ax=plt.subplot(3,3,i+1)
        plt.imshow(images[i].numpy().astype("uint8"))  # what do numpy() and imshow do?
        plt.title(class_names[labels[i]])
        plt.axis("off")
plt.show()

AUTOTUNE =tf.data.experimental.AUTOTUNE
train_ds=train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds=val_ds.cache().prefetch(buffer_size=AUTOTUNE)
normaliztion_layer= layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds=train_ds.map(lambda x,y:(normaliztion_layer(x),y))
image_batch,labels_batch = next(iter(normalized_ds))  # automatic iteration function
first_image = image_batch[0]
print(np.min(first_image), np.max(first_image))

data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip("horizontal", input_shape=(image_height, image_width, 3)),
        layers.experimental.preprocessing.RandomRotation(0.1),
        layers.experimental.preprocessing.RandomZoom(0.1),
    ]
)

num_classes=5
model2 =Sequential([
    layers.experimental.preprocessing.Rescaling(1. / 255, input_shape=(image_height, image_width, 3)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPool2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPool2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPool2D(),  # think of pooling as filtering: it keeps the MAX value of the convolution output to reduce the data size
    layers.Dropout(0.2),
    layers.Flatten(),  # a function that reduces the dimensions of the convolution output
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])
_____no_output_____
MIT
speedlimits.ipynb
kimsejin111/git_test
model2.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) epochs=15 history=model2.fit( train_ds, validation_data=val_ds, epochs=epochs, ) acc_E=history.history['accuracy'] val_E_acc=history.history['val_accuracy'] loss_E=history.history['loss'] val_E_loss=history.history['val_loss'] epochs_range= range(epochs) plt.figure(figsize=(8,8)) plt.subplot(1,2,1) plt.plot(epochs_range,acc_E,label="Training ACC") plt.plot(epochs_range,val_E_acc,label="Validation ACC") plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(1,2,2) plt.plot(epochs_range,loss_E,label="Training Loss") plt.plot(epochs_range,val_E_loss,label="Validation Loss") plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.show()
_____no_output_____
MIT
speedlimits.ipynb
kimsejin111/git_test
Scenario plots with single lines
output_names = ["notifications", "icu_occupancy"] scenario_x_min, scenario_x_max = 367, 920 sc_to_plot = [0, 1] legend = ["With vaccine", "Without vaccine"] lift_time = 731 text_font = 14 sc_colors = [COLOR_THEME[i] for i in scenario_list] sc_linestyles = ["solid"] * (len(scenario_list)) for output_type in ["median", "MLE"]: for output_name in output_names: plot_outputs(output_type, output_name, sc_to_plot, sc_linestyles, sc_colors, False, x_min=scenario_x_min, x_max=scenario_x_max) path = os.path.join(base_dir, output_type, f"{output_name}.png") plt.legend(labels=legend, fontsize=text_font, facecolor="white") ymax = plt.gca().get_ylim()[1] plt.vlines(x=lift_time,ymin=0,ymax=1.05*ymax, linestyle="dashed") # 31 Dec 2021 plt.text(x=(scenario_x_min + lift_time) / 2., y=1.* ymax, s="Vaccination phase", ha="center", fontsize = text_font) plt.text(x=lift_time + 3, y=ymax, s="Restrictions lifted", fontsize = text_font, rotation=90, va="top") plt.savefig(path)
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Make Adverse Effects figures
params = project.param_set.baseline.to_dict() ae_risk = { "AstraZeneca": params["vaccination_risk"]["tts_rate"], "mRNA": params["vaccination_risk"]["myocarditis_rate"] } agg_agegroups = ["10_14","15_19", "20_29", "30_39", "40_49", "50_59", "60_69", "70_plus"] text_font = 12 vacc_scenarios = { "mRNA": 2, "AstraZeneca": 2, } adverse_effects = { "mRNA": "myocarditis", "AstraZeneca": "thrombosis with thrombocytopenia syndrome", } adverse_effects_short= { "mRNA": "myocarditis", "AstraZeneca": "tts", } left_title = "COVID-19-associated hospitalisations prevented" def format_age_label(age_bracket): if age_bracket.startswith("70"): return "70+" else: return age_bracket.replace("_", "-") def make_ae_figure(vacc_scenario, log_scale=False): trimmed_df = uncertainty_df[ (uncertainty_df["scenario"]==vacc_scenarios[vacc_scenario]) & (uncertainty_df["time"]==913) ] right_title = f"Cases of {adverse_effects[vacc_scenario]}" fig = plt.figure(figsize=(10, 4)) plt.style.use("default") axis = fig.add_subplot() h_max = 0 delta_agegroup = 1.2 if log_scale else 4000 barwidth = .7 text_offset = 0.5 if log_scale else 20 unc_color = "black" unc_lw = 1. for i, age_bracket in enumerate(agg_agegroups): y = len(agg_agegroups) - i - .5 plt.text(x=delta_agegroup / 2, y=y, s=format_age_label(age_bracket), ha="center", va="center", fontsize=text_font) # get outputs hosp_output_name = f"abs_diff_cumulative_hospital_admissionsXagg_age_{age_bracket}" ae_output_name = f"abs_diff_cumulative_{adverse_effects_short[vacc_scenario]}_casesXagg_age_{age_bracket}" prev_hosp_df = trimmed_df[trimmed_df["type"] == hosp_output_name] prev_hosp_values = [ # median, lower, upper float(prev_hosp_df['value'][prev_hosp_df["quantile"] == q]) for q in [0.5, 0.025, 0.975] ] log_prev_hosp_values = [math.log10(v) for v in prev_hosp_values] ae_df = trimmed_df[trimmed_df["type"] == ae_output_name] ae_values = [ # median, lower, upper - float(ae_df['value'][ae_df["quantile"] == q]) for q in [0.5, 0.975, 0.025] ] log_ae_values = [max(math.log10(v), 0) for v in ae_values] if log_scale: plot_h_values = log_prev_hosp_values plot_ae_values = log_ae_values else: plot_h_values = prev_hosp_values plot_ae_values = ae_values h_max = max(plot_h_values[2], h_max) origin = 0 # hospital rect = mpatches.Rectangle((origin, y - barwidth/2), width=-plot_h_values[0], height=barwidth, facecolor="cornflowerblue") axis.add_patch(rect) plt.hlines(y=y, xmin=-plot_h_values[1], xmax=-plot_h_values[2], color=unc_color, linewidth=unc_lw) disp_val = int(prev_hosp_values[0]) plt.text(x= -plot_h_values[0] - text_offset, y=y + barwidth/2, s=int(disp_val), ha="right", va="center", fontsize=text_font*.7) min_bar_length = 0 if not log_scale: min_bar_length = 0 if vacc_scenario == "Astrazeneca" else 0 rect = mpatches.Rectangle((delta_agegroup + origin, y - barwidth/2), width=max(plot_ae_values[0], min_bar_length), height=barwidth, facecolor="tab:red") axis.add_patch(rect) plt.hlines(y=y, xmin=delta_agegroup + origin + plot_ae_values[1], xmax=delta_agegroup + origin + plot_ae_values[2], color=unc_color, linewidth=unc_lw) disp_val = int(ae_values[0]) plt.text(x=delta_agegroup + origin + max(plot_ae_values[0], min_bar_length) + text_offset, y=y + barwidth/2, s=int(disp_val), ha="left", va="center", fontsize=text_font*.7) # main title axis.set_title(f"Benefit/Risk analysis with {vacc_scenario} vaccine", fontsize = text_font + 2) # x axis ticks if log_scale: max_val_display = math.ceil(h_max) else: magnitude = 500 max_val_display = math.ceil(h_max / magnitude) * magnitude # sub-titles 
plt.text(x= - max_val_display / 2, y=len(agg_agegroups) + .3, s=left_title, ha="center", fontsize=text_font) plt.text(x= max_val_display / 2 + delta_agegroup, y=len(agg_agegroups) + .3, s=right_title, ha="center", fontsize=text_font) if log_scale: ticks = range(max_val_display + 1) rev_ticks = [-t for t in ticks] rev_ticks.reverse() x_ticks = rev_ticks + [delta_agegroup + t for t in ticks] labels = [10**(p) for p in range(max_val_display + 1)] rev_labels = [l for l in labels] rev_labels.reverse() x_labels = rev_labels + labels x_labels[max_val_display] = x_labels[max_val_display + 1] = 0 else: n_ticks = 6 x_ticks = [-max_val_display + j * (max_val_display/(n_ticks - 1)) for j in range(n_ticks)] + [delta_agegroup + j * (max_val_display/(n_ticks - 1)) for j in range(n_ticks)] rev_n_ticks = x_ticks[:n_ticks] rev_n_ticks.reverse() x_labels = [int(-v) for v in x_ticks[:n_ticks]] + [int(-v) for v in rev_n_ticks] plt.xticks(ticks=x_ticks, labels=x_labels) # x, y lims axis.set_xlim((-max_val_display, max_val_display + delta_agegroup)) axis.set_ylim((0, len(agg_agegroups) + 1)) # remove axes axis.set_frame_on(False) axis.axes.get_yaxis().set_visible(False) log_ext = "_log_scale" if log_scale else "" path = os.path.join(base_dir, f"{vacc_scenario}_adverse_effects{log_ext}.png") plt.tight_layout() plt.savefig(path, dpi=600) for vacc_scenario in ["mRNA", "AstraZeneca"]: for log_scale in [False,True]: make_ae_figure(vacc_scenario, log_scale)
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Counterfactual no vaccine scenario
output_type = "uncertainty" output_names = ["notifications", "icu_occupancy", "accum_deaths"] sc_to_plot = [0, 1] x_min, x_max = 400, 670 vacc_start = 426 for output_name in output_names: axis = plot_outputs(output_type, output_name, sc_to_plot, sc_linestyles, sc_colors, False, x_min=400, x_max=670) y_max = plt.gca().get_ylim()[1] plt.vlines(x=vacc_start, ymin=0, ymax=y_max, linestyle="dashdot") plt.text(x=vacc_start - 5, y=.6 * y_max, s="Vaccination starts", rotation=90, fontsize=12) path = os.path.join(base_dir, f"{output_name}_counterfactual.png") plt.tight_layout() plt.savefig(path, dpi=600)
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
number of lives saved
today = 660 # 21 Oct df = uncertainty_df[(uncertainty_df["type"] == "accum_deaths") & (uncertainty_df["quantile"] == 0.5) & (uncertainty_df["time"] == today)] baseline = float(df[df["scenario"] == 0]["value"]) counterfact = float(df[df["scenario"] == 1]["value"]) print(counterfact - baseline)
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Captum Insights for Visual Question Answering
This notebook provides a simple example for the [Captum Insights API](https://captum.ai/docs/captum_insights), which is an easy-to-use API built on top of Captum that provides a visualization widget.
It is suggested to first read the multi-modal [tutorial](https://captum.ai/tutorials/Multimodal_VQA_Interpret) with VQA that utilises the `captum.attr` API. This tutorial will skip over a large chunk of details for setting up the VQA model.
As with the referenced tutorial, you will need the following installed on your machine:
- Python Packages: torchvision, PIL, and matplotlib
- pytorch-vqa: https://github.com/Cyanogenoid/pytorch-vqa
- pytorch-resnet: https://github.com/Cyanogenoid/pytorch-resnet
- A pretrained pytorch-vqa model, which can be obtained from: https://github.com/Cyanogenoid/pytorch-vqa/releases/download/v1.0/2017-08-04_00.55.19.pth
Please modify the below section for your specific installation paths:
import sys, os # Replace the placeholder strings with the associated # path for the root of pytorch-vqa and pytorch-resnet respectively PYTORCH_VQA_DIR = os.path.realpath("../../pytorch-vqa") PYTORCH_RESNET_DIR = os.path.realpath("../../pytorch-resnet") # Please modify this path to where it is located on your machine # you can download this model from: # https://github.com/Cyanogenoid/pytorch-vqa/releases/download/v1.0/2017-08-04_00.55.19.pth VQA_MODEL_PATH = "models/2017-08-04_00.55.19.pth" assert(os.path.exists(PYTORCH_VQA_DIR)) assert(os.path.exists(PYTORCH_RESNET_DIR)) assert(os.path.exists(VQA_MODEL_PATH)) sys.path.append(PYTORCH_VQA_DIR) sys.path.append(PYTORCH_RESNET_DIR)
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Now, we will import the necessary modules to run the code in this tutorial. Please make sure you have the [prerequisites to run captum](https://captum.ai/docs/getting_started), along with the pre-requisites to run this tutorial (as described in the first section).
import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap from PIL import Image import torch import torchvision import torchvision.transforms as transforms import torch.nn.functional as F try: import resnet # from pytorch-resnet except: print("please provide a valid path to pytorch-resnet") try: from model import Net, apply_attention, tile_2d_over_nd # from pytorch-vqa from utils import get_transform # from pytorch-vqa except: print("please provide a valid path to pytorch-vqa") from captum.insights import AttributionVisualizer, Batch from captum.insights.features import ImageFeature, TextFeature from captum.attr import TokenReferenceBase, configure_interpretable_embedding_layer, remove_interpretable_embedding_layer # Let's set the device we will use for model inference device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
VQA Model Setup
Let's load the VQA model (again, please refer to the [model interpretation tutorial on VQA](https://captum.ai/tutorials/Multimodal_VQA_Interpret) if you want details).
saved_state = torch.load(VQA_MODEL_PATH, map_location=device) # reading vocabulary from saved model vocab = saved_state["vocab"] # reading word tokens from saved model token_to_index = vocab["question"] # reading answers from saved model answer_to_index = vocab["answer"] num_tokens = len(token_to_index) + 1 # reading answer classes from the vocabulary answer_words = ["unk"] * len(answer_to_index) for w, idx in answer_to_index.items(): answer_words[idx] = w vqa_net = torch.nn.DataParallel(Net(num_tokens), device_ids=[0, 1]) vqa_net.load_state_dict(saved_state["weights"]) vqa_net = vqa_net.to(device) # for visualization to convert indices to tokens for questions question_words = ["unk"] * num_tokens for w, idx in token_to_index.items(): question_words[idx] = w
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Let's modify the VQA model to use pytorch-resnet. Our model will be called `vqa_resnet`.
class ResNetLayer4(torch.nn.Module): def __init__(self): super().__init__() self.r_model = resnet.resnet152(pretrained=True) self.r_model.eval() self.r_model.to(device) self.buffer = None def save_output(module, input, output): self.buffer = output self.r_model.layer4.register_forward_hook(save_output) def forward(self, x): self.r_model(x) return self.buffer class VQA_Resnet_Model(Net): def __init__(self, embedding_tokens): super().__init__(embedding_tokens) self.resnet_layer4 = ResNetLayer4() def forward(self, v, q, q_len): q = self.text(q, list(q_len.data)) v = self.resnet_layer4(v) v = v / (v.norm(p=2, dim=1, keepdim=True).expand_as(v) + 1e-8) a = self.attention(v, q) v = apply_attention(v, a) combined = torch.cat([v, q], dim=1) answer = self.classifier(combined) return answer vqa_resnet = VQA_Resnet_Model(vqa_net.module.text.embedding.num_embeddings) # `device_ids` contains a list of GPU ids which are used for parallelization supported by `DataParallel` vqa_resnet = torch.nn.DataParallel(vqa_resnet, device_ids=[0, 1]) # saved vqa model's parameters partial_dict = vqa_net.state_dict() state = vqa_resnet.state_dict() state.update(partial_dict) vqa_resnet.load_state_dict(state) vqa_resnet.to(device) vqa_resnet.eval() # This is original VQA model without resnet. Removing it, since we do not need it del vqa_net # this is necessary for the backpropagation of RNNs models in eval mode torch.backends.cudnn.enabled = False
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
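The interesting trick in ResNetLayer4 above is the forward hook: instead of re-implementing ResNet up to layer4, it runs the full network and captures layer4's activations as they are computed. A minimal standalone sketch of the same pattern, using a toy module rather than ResNet (illustration only):

captured = {}

def grab_output(module, inputs, output):
    captured["feat"] = output  # stash the intermediate activation

toy = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
handle = toy[1].register_forward_hook(grab_output)  # hook the ReLU's output

_ = toy(torch.randn(3, 4))
print(captured["feat"].shape)  # torch.Size([3, 8]): the activation captured during the forward pass
handle.remove()                # detach the hook when it is no longer needed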