path (string lengths 7-265) | concatenated_notebook (string lengths 46-17M)
---|---
LabExercise8Answer.ipynb | ###Markdown
1D and 2D Discrete Wavelet Transform
###Code
import numpy as np
# we need to install the PyWavelets library by running either "pip install PyWavelets" or "conda install pywavelets"
import pywt
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
# The PyWavelets library contains 14 mother wavelet families, varying in shape, smoothness and compactness.
# They satisfy two mathematical conditions: 1. They are localized in time and frequency, 2. They have 0 mean.
# Explore wavelets further http://wavelets.pybytes.com/
print(pywt.families(short=False))
discrete_wavelets=['haar','db7','sym3','coif3']  # note: pywt wavelet names are lowercase
continuous_wavelets=['mexh','morl','gaus5','cgau7']
wavelets=[discrete_wavelets, continuous_wavelets]
funcs=[pywt.Wavelet, pywt.ContinuousWavelet]
fig, axarr = plt.subplots(nrows=2, ncols=4, figsize=(16,8))
for i, get_wavelets in enumerate(wavelets):
func=funcs[i]
row_no=i
for col_no, wavel_name in enumerate(get_wavelets):
wavelet=func(wavel_name)
family_name=wavelet.family_name
if i==0:
f=wavelet.wavefun()
wavelet_function=f[0] # get an array of y-values
x_values=f[-1] # get an array of x-values
else:
wavelet_function, x_values=wavelet.wavefun()
if col_no==0 and i==0:
axarr[row_no, col_no].set_ylabel("Discrete Wavelets", fontsize=16)
if col_no==0 and i==1:
axarr[row_no, col_no].set_ylabel("Continuous Wavelets", fontsize=16)
axarr[row_no, col_no].set_title("{}".format(family_name), fontsize=16)
axarr[row_no, col_no].plot(x_values, wavelet_function)
axarr[row_no, col_no].set_yticks([])
axarr[row_no, col_no].set_yticklabels([])
plt.show()
###Output
_____no_output_____
###Markdown
How are these wavelets different? Discrete Wavelet Transform: 1D study. We have seen that the DWT is implemented as a filter bank, i.e. a cascade of high-pass and low-pass filters. To apply the DWT to a signal, we start at the smallest scale. Small scales correspond to high frequencies, so we analyze high-frequency behaviour first. At the second stage the scale increases by a factor of 2 (the frequency decreases by a factor of 2), and we analyze the part of the signal around half of the maximum frequency. We keep iterating the decomposition until we reach the maximum decomposition level. Understanding the maximum decomposition level: due to downsampling, at some stage the number of samples in the signal becomes smaller than the length of the wavelet filter, and at that point we have reached the maximum decomposition level.
###Code
# create a signal to analyse
from scipy.signal import chirp, spectrogram
# e.g., linear chirp satisfies the following equation: f(t)=f0+(f1-f0)*t/t1
t=np.linspace(0, 10, 1500) # 1500 sampling points in 10 seconds
signal=chirp(t, f0=6, f1=1,t1=10, method='linear')
plt.plot(t,signal)
plt.title("Linear Chirp, f(0)=6, f(10)=1")
plt.xlabel('t (sec)')
plt.show()
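# Quick check (not part of the original lab): the deepest useful decomposition level
# for this signal with the db7 wavelet used below, as computed by pywt itself.
print("Max decomposition level:", pywt.dwt_max_level(len(signal), pywt.Wavelet('db7').dec_len))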
###Output
_____no_output_____
###Markdown
Computing the frequency range covered by the coefficients at each level. We have 1500 sampling points in 10 sec, i.e. a sampling rate of 150 samples per second, so the highest frequency present (the Nyquist frequency) is 75 Hz. The first-level approximation therefore contains frequencies from 0 to 37.5 Hz, and the first-level detail from 37.5 to 75 Hz. The second-level approximation contains frequencies from 0 to 18.75 Hz, and the detail the subband from 18.75 to 37.5 Hz. The third-level approximation contains frequencies up to 9.375 Hz, with the detail covering 9.375 to 18.75 Hz. Finally, the fourth-level approximation contains frequencies up to about 4.69 Hz, and the detail covers the range [4.69, 9.375] Hz.
###Code
data = signal
waveletname = 'db7'
# let's setup a 4-step filter bank to find the approximation and detail wavelet coefficients of the signal wavelet transform
fig, axarr = plt.subplots(nrows=4, ncols=2, figsize=(8,8))
#collect the wavelet coefficients into
app_coeffs=[]
det_coeffs=[]
for i in range(4):
(data, coeff_d) = pywt.dwt(data, waveletname) # perform single stage iteratively
app_coeffs.append(data)# approximation coefs
det_coeffs.append(coeff_d)
axarr[i, 0].plot(data, 'b')
axarr[i, 1].plot(coeff_d, 'g')
axarr[i, 0].set_ylabel("Level {}".format(i + 1), fontsize=14, rotation=90)
axarr[i, 0].set_yticklabels([])
if i == 0:
axarr[i, 0].set_title("Approximation coefficients", fontsize=14)
axarr[i, 1].set_title("Detail coefficients", fontsize=14)
axarr[i, 1].set_yticklabels([])
plt.tight_layout()
plt.show()
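# For reference (not in the original lab): the ideal frequency band covered by each
# level, given the 150 samples/sec rate described in the markdown above.
fs = 150
for level in range(1, 5):
    print("Level {}: approximation 0-{:.2f} Hz, detail {:.2f}-{:.2f} Hz".format(
        level, fs / 2 ** (level + 1), fs / 2 ** (level + 1), fs / 2 ** level))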
###Output
_____no_output_____
###Markdown
Question 1: Given the results obtained above, which coefficients represent the output of the high-pass filter? What happens to the signal resolution as you go from one level of the wavelet transform to the next? How were the detail coefficients at each level generated?
###Code
# leave your answer here
## Signal reconstruction/ synthesis
#___________________________________
order=[3,2,1,0]
app_coeffs=[app_coeffs[i] for i in order]
det_coeffs=[det_coeffs[i] for i in order]
coeffs = pywt.wavedec(signal, 'db7', level=4)# prepare all coefficients in the right format for .waverec function
signal_r=pywt.waverec(coeffs, 'db7')
fig=plt.figure(figsize=(16,8))
plt.subplot(121)
plt.plot(t,signal)
plt.title("Original")
plt.xlabel('t (sec)')
plt.subplot(122)
plt.plot(t,signal_r, color='r')
plt.title("Reconsructed from 4-step filter bank")
plt.xlabel('t (sec)')
plt.show()
coeffs
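# Sanity check (not part of the original exercise): the 4-level decomposition/
# reconstruction is invertible up to floating-point error.
print("Signals match:", np.allclose(signal, signal_r))
print("Max reconstruction error:", np.max(np.abs(signal - signal_r)))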
###Output
_____no_output_____
###Markdown
2D DWT for image denoising. Recall the 2D coordinate convention used by imshow: the first array axis (rows) runs downwards and the second axis (columns) runs to the right. Load an image as a 512x512 array with pixel intensities in the range from 0 to 255. We corrupt the image with Gaussian noise ($\sigma=32$) and perform denoising using Haar wavelet coefficients.
###Code
import scipy.misc  # ascent() lives in the scipy.misc submodule (scipy.datasets in newer SciPy)
image1 = scipy.misc.ascent().astype(float)
noiseSigma = 32.0
image = image1+ np.random.normal(0, noiseSigma, size=image1.shape)
plt.subplot(121)
imshow(image, cmap='gray')
plt.title('Noisy image')
plt.subplot(122)
imshow(image1, cmap='gray')
plt.title('Original image')
wavelet = pywt.Wavelet('haar')
# levels = ?   <-- how many levels can we use here? (answered in Question 2 below)
###Output
_____no_output_____
###Markdown
Question 2. What is the maximum decomposition level we can reach if we apply the multi-step filter bank? Hint: consider the size of the image and how many times you can downsample it before you run out of image samples.
###Code
# Leave you answer here
# 512 --> 256 --> 128 --> 64 --> 32 --> 16 --> 8 --> 4 --> 2 --> 1, i.e. 9 halvings
levels=9
wavelet_coeffs=pywt.wavedec2(image, wavelet, level=levels)
print("approximation at the highest level", wavelet_coeffs[0])
print("detail coefficients at the highest level (horizontal, vertical, diagonal)", wavelet_coeffs[1])
print("approximation at the second highest level", wavelet_coeffs[1])
print("detail coefficients at the second highest level (horizontal, vertical, diagonal)", wavelet_coeffs[2])
###Output
_____no_output_____
###Markdown
In order to denoise our image, we will use the soft-thresholding function available in the pywt library (pywt.threshold with mode='soft'). We will apply it to every detail coefficient. Data values with absolute value below the threshold are shrunk towards zero.
###Code
threshold=noiseSigma*np.sqrt(2*np.log2(image.size))
# We use a soft thresholding on each of the wavelet coefficients. Data values with absolute value less than "threshold"
# are replaced with a substitute
new=[]
k=0
for s in wavelet_coeffs:
if k==0:
new_ar=np.ndarray((1,1),buffer=np.zeros((1,1)))
new_ar=s
new.append(new_ar)
else:
new_ar=[]
for i in range(len(s)):
s_soft = pywt.threshold(s[i], value=threshold, mode='soft')
new_ar.append(s_soft)
new_ar=tuple(new_ar)
new.append(new_ar)
k=k+1
# We obtain the corresponding reconstruction
newimage = pywt.waverec2(new, wavelet)
imshow(newimage, cmap='gray')
plt.title("Reconstructed image with Haar wavelet")
###Output
_____no_output_____
###Markdown
Question 3: Why are you observing a block-like artifact in the reconstructed image? Does the choice of the wavelet matter?
###Code
# Type your answer here
#Can we find a better solution with a different choice of wavelet? In the function below, we keep the threshold the same,
# but we can explore other choices of wavelet functions.
def denoise(data, wavelet, noiseSigma):
levels=9
wave_c=pywt.wavedec2(data,wavelet,level=levels)
threshold=noiseSigma*np.sqrt(2*np.log2(data.size))
new=[]
k=0
for s in wave_c:
if k==0:
new_ar=np.ndarray((1,1),buffer=np.zeros((1,1)))
new_ar=s
new.append(new_ar)
else:
new_ar=[]
for i in range(len(s)):
s_soft = pywt.threshold(s[i], value=threshold, mode='soft')
new_ar.append(s_soft)
new_ar=tuple(new_ar)
new.append(new_ar)
k=k+1
# We obtain the corresponding reconstruction
newimage = pywt.waverec2(new, wavelet)
return newimage
# Let's see the result with coif3
image_coif=denoise(data=image, wavelet='coif3',noiseSigma=32.0)
imshow(image_coif, cmap='gray')
plt.title("Reconstructed image with coif3 wavelet")
###Output
_____no_output_____
###Markdown
Question 4: Choose two other wavelets from discrete_wavelets=['haar','db7','sym3','coif3'], use the "denoise" function for noise reduction, and comment on the quality of the image denoising depending on the choice of wavelet. What other approaches do you think we should try in order to improve the denoising result?
###Code
# Leave your answer here
###Output
_____no_output_____ |
bureau.ipynb | ###Markdown
Bureau and Bureau Balance data
*bureau.csv* data concerns the client's earlier credits from other financial institutions. Some of the credits may be active and some are closed. Each previous (or ongoing) credit has its own row (only one row per credit) in the *bureau* dataset. As a single client might have taken other loans from other financial institutions, for each row in the *application_train* data (i.e. *application_train.csv*) we can have multiple rows in this table. Feature explanations for this dataset are given below.
Feature explanations
Bureau table
- SK_ID_CURR: ID of loan in our sample - one loan in our sample can have 0, 1, 2 or more related previous credits in the credit bureau
- SK_BUREAU_ID: Recoded ID of previous Credit Bureau credit related to our loan (unique coding for each loan application)
- CREDIT_ACTIVE: Status of the Credit Bureau (CB) reported credits
- CREDIT_CURRENCY: Recoded currency of the Credit Bureau credit
- DAYS_CREDIT: How many days before the current application the client applied for the Credit Bureau credit
- CREDIT_DAY_OVERDUE: Number of days past due on CB credit at the time of application for the related loan in our sample
- DAYS_CREDIT_ENDDATE: Remaining duration of CB credit (in days) at the time of application in Home Credit
- DAYS_ENDDATE_FACT: Days since CB credit ended at the time of application in Home Credit (only for closed credit)
- AMT_CREDIT_MAX_OVERDUE: Maximal amount overdue on the Credit Bureau credit so far (at application date of loan in our sample)
- CNT_CREDIT_PROLONG: How many times the Credit Bureau credit was prolonged
- AMT_CREDIT_SUM: Current credit amount for the Credit Bureau credit
- AMT_CREDIT_SUM_DEBT: Current debt on Credit Bureau credit
- AMT_CREDIT_SUM_LIMIT: Current credit limit of credit card reported in Credit Bureau
- AMT_CREDIT_SUM_OVERDUE: Current amount overdue on Credit Bureau credit
- CREDIT_TYPE: Type of Credit Bureau credit (car, cash, ...)
- DAYS_CREDIT_UPDATE: How many days before the loan application the last information about the Credit Bureau credit came
- AMT_ANNUITY: Annuity of the Credit Bureau credit
Bureau Balance table
- SK_BUREAU_ID: Recoded ID of Credit Bureau credit (unique coding for each application) - use this to join to the CREDIT_BUREAU table
- MONTHS_BALANCE: Month of balance relative to application date (-1 means the freshest balance date) - time only relative to the application
- STATUS: Status of Credit Bureau loan during the month
###Code
# Last amended: 21st October, 2020
# Myfolder: C:\Users\Administrator\OneDrive\Documents\home_credit_default_risk
# Objective:
# Solving Kaggle problem: Home Credit Default Risk
# Processing bureau and bureau_balance datasets.
#
# Data Source: https://www.kaggle.com/c/home-credit-default-risk/data
# Ref: https://www.kaggle.com/jsaguiar/lightgbm-with-simple-features
# 1.0 Libraries
# (Some of these may not be needed here.)
%reset -f
import numpy as np
import pandas as pd
import gc
# 1.1 Reduce read data size
# There is a file reducing.py
# in this folder. A class
# in it is used to reduce
# dataframe size
# (Code modified to
# exclude 'category' dtype)
import reducing
# 1.2 Misc
import warnings
import os
warnings.simplefilter(action='ignore', category=FutureWarning)
# 1.3
pd.set_option('display.max_colwidth', None)   # None = no truncation; -1 is deprecated in newer pandas
# 1.4 Display multiple commands outputs from a cell
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# 2.0 One-hot encoding function. Uses pd.get_dummies()
# i) To transform 'object' columns to dummies.
# ii) Treat NaN as one of the categories
# iii) Returns transformed-data and new-columns created
def one_hot_encoder(df, nan_as_category = True):
original_columns = list(df.columns)
categorical_columns = [col for col in df.columns if df[col].dtype == 'object']
df = pd.get_dummies(df,
columns= categorical_columns,
dummy_na= nan_as_category # Treat NaNs as category
)
new_columns = [c for c in df.columns if c not in original_columns]
return df, new_columns
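# 2.1 Tiny illustration of one_hot_encoder (toy frame, not part of the pipeline):
#     NaN gets its own dummy column because nan_as_category defaults to True.
_toy = pd.DataFrame({'colour': ['red', 'blue', np.nan], 'amount': [1, 2, 3]})
_toy, _toy_new_cols = one_hot_encoder(_toy)
print(_toy_new_cols)   # e.g. ['colour_blue', 'colour_red', 'colour_nan']
del _toy, _toy_new_cols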
# 3.0 Prepare to read data
pathToData = "C:\\Users\\Administrator\\OneDrive\\Documents\\home_credit_default_risk"
os.chdir(pathToData)
# 3.1 Some constants
num_rows=None # Implies read all rows
nan_as_category = True # While transforming
# 'object' columns to dummies
# 3.2 Read bureau data first
bureau = pd.read_csv(
'bureau.csv.zip',
nrows = None # Read all rows
)
# 3.2.1 Reduce memory usage by appropriately
# changing data-types per feature:
bureau = reducing.Reducer().reduce(bureau)
# 3.2.2 Explore data now
bureau.head(5)
bureau.shape # (rows: 1,716,428, cols: 17)
bureau.dtypes
# 3.2.3 In all, how many are categoricals?
bureau.dtypes.value_counts()
# 3.3
bureau.shape # (1716428, 17)
# 3.3.1
# What is the actual number of persons
# who might have taken multiple loans?
bureau['SK_ID_CURR'].nunique() # 305811 -- Many duplicate values exist
# Consider SK_ID_CURR as Foreign Key
# Primary key exists in application_train data
# Primary key: SK_ID_BUREAU
# 3.3.2
# As expected, there are no duplicate values here
bureau['SK_ID_BUREAU'].nunique() # 1716428 -- Unique id for each row
# 3.4 Summary of active/closed cases from bureau
# We aggregate on these also
bureau['CREDIT_ACTIVE'].value_counts()
###Output
_____no_output_____
###Markdown
Aggregation. bureau_balance will be aggregated and merged with bureau. bureau will then be aggregated and merged with the 'application_train' data. bureau will be aggregated in three different ways, each by SK_ID_CURR. Finally, the aggregated bureau, called bureau_agg, will be merged with 'application_train' over SK_ID_CURR. Aggregation over time is one way to extract the behaviour of a client. All categorical data is first OneHotEncoded (OHE). What is unique about this OHE is that NaN values are treated as categories.
###Code
# 4.0 OneHotEncode 'object' types in bureau
bureau, bureau_cat = one_hot_encoder(bureau, nan_as_category)
# 4.1
bureau.head()
bureau.shape # (1716428, 40); 17-->40
print(bureau_cat) # List of added columns
###Output
_____no_output_____
###Markdown
bureau_balance. It is monthly data about the remaining balance of each one of the previous credits of the clients that appear in the bureau dataset. Each previous credit is identified by a unique ID, SK_ID_BUREAU, in the bureau dataset. Each row in bureau_balance is one month of credit due (from a previous credit), and a single previous credit can have multiple rows, one for each month of the credit length. In my personal view, it should be in decreasing order; that is, for every credit identified by SK_ID_BUREAU, the amount due should decrease with each passing month.
###Code
# 5.0 Read over bureau_balance data
# and reduce memory usage through
# conversion of data-types:
bb = pd.read_csv('bureau_balance.csv.zip', nrows = None)
bb = reducing.Reducer().reduce(bb)
# 5.0.1 Display few rows
bb.head(10)
# 5.0.2 & Compare
bb.shape # (27299925, 3)
bureau.shape # (1716428, 17)
# 5.1 There is just one 'object' column
bb.dtypes.value_counts()
# 5.2 Is the data about all bureau cases?
# No, it appears it is not for all cases
bb['SK_ID_BUREAU'].nunique() # 817395 << 1716428
# 5.3 Just which cases are present in 'bureau' but absent
# in 'bb'
bb_id_set = set(bb['SK_ID_BUREAU']) # Set of IDs in bb
bureau_id_set = set(bureau['SK_ID_BUREAU']) # Set of IDs in bureau
# 5.4 And here is the difference list.
# How many of them?
list(bureau_id_set - bb_id_set)[:5] # sample [6292791,6292792,6292793,6292795,6292796,6292797,6292798,6292799]
len(bureau_id_set - bb_id_set) # 942074
# 5.5 OK. So let us OneHotEncode bb
bb, bb_cat = one_hot_encoder(bb, nan_as_category)
# 5.6 Examine the results
bb.head()
bb.shape # (27299925, 11) ; 3-->11
# 1 (ID) + 1 (numeric) + 9 (dummy)
bb_cat # New columns added
###Output
_____no_output_____
###Markdown
Performing aggregations in bb. There is one numeric feature, 'MONTHS_BALANCE', on which we will perform ['min', 'max', 'size']. On the rest of the features, the dummy features, we will perform ['mean']. Aggregation is by the unique bureau ID, SK_ID_BUREAU. The resulting dataset is called bb_agg.
###Code
# 6.0 Bureau balance: Perform aggregations and merge with bureau.csv
# First prepare a dictionary listing operations to be performed
# on various features:
bb_aggregations = {'MONTHS_BALANCE': ['min', 'max', 'size']}
for col in bb_cat:
bb_aggregations[col] = ['mean']
# 6.0.1
len(bb_aggregations) # 10
# 6.1 So what all aggregations to perform column-wise
bb_aggregations
# 6.2 Perform aggregations now in bb:
grouped = bb.groupby('SK_ID_BUREAU')
bb_agg = bb.groupby('SK_ID_BUREAU').agg(bb_aggregations)
# 6.3
bb_agg.shape # (817395, 12)
bb_agg.columns
# 6.3.1 Note that 'SK_ID_BUREAU'
# the grouping column is
# now table-index
bb_agg.head()
# 6.4 Rename bb_agg columns
bb_agg.columns = pd.Index([e[0] + "_" + e[1].upper() for e in bb_agg.columns.tolist()])
# 6.4.1
bb_agg.columns.tolist()
bb_agg.head()
# 6.5 Merge aggregated bb with bureau
bureau = bureau.join(
bb_agg,
how='left',
on='SK_ID_BUREAU'
)
# 6.5.1
bureau.head()
bureau.shape # (1716428, 52)
bureau.dtypes
# 6.5.2 Just for curiosity, what happened
# to those rows in 'bureau' where there
# was no matching record in bb_agg. The list
# of such IDs is:
# [6292791,6292792,6292793,6292795,6292796,6292797,6292798,6292799]
bureau[bureau['SK_ID_BUREAU'] ==6292791]
# 6.6 Drop SK_ID_BUREAU as bb has finally merged.
bureau.drop(['SK_ID_BUREAU'],
axis=1,
inplace= True
)
# We have three types of columns
# Categorical columns generated from bureau
# Categorical columns generated from bb
# Numerical columns
###Output
_____no_output_____
###Markdown
Performing aggregations in bureau. Aggregate the 14 original numeric columns using ['min', 'max', 'mean', 'var'] (the exact choice per column is listed in the code below), and aggregate the rest of the columns, that is the dummy columns, with ['mean']. This constitutes one of the three aggregations. Aggregation is by SK_ID_CURR. The resulting dataset is called bureau_agg.
###Code
# 7.0 Have a look at bureau again.
# SK_ID_CURR repeats for many cases.
# So, there is a case for aggregation
bureau.shape # (1716428, 51)
bureau.head()
## Aggregation strategy
# 7.1 Numeric features
# Columns: Bureau + bureau_balance numeric features
# Last three columns are from bureau_balance
# Total: 11 + 3 = 14
num_aggregations = {
'DAYS_CREDIT': ['min', 'max', 'mean', 'var'],
'DAYS_CREDIT_ENDDATE': ['min', 'max', 'mean'],
'DAYS_CREDIT_UPDATE': ['mean'],
'CREDIT_DAY_OVERDUE': ['max', 'mean'],
'AMT_CREDIT_MAX_OVERDUE': ['mean'],
'AMT_CREDIT_SUM': ['max', 'mean', 'sum'],
'AMT_CREDIT_SUM_DEBT': ['max', 'mean', 'sum'],
'AMT_CREDIT_SUM_OVERDUE': ['mean'],
'AMT_CREDIT_SUM_LIMIT': ['mean', 'sum'],
'AMT_ANNUITY': ['max', 'mean'],
'CNT_CREDIT_PROLONG': ['sum'],
'MONTHS_BALANCE_MIN': ['min'],
'MONTHS_BALANCE_MAX': ['max'],
'MONTHS_BALANCE_SIZE': ['mean', 'sum']
}
len(num_aggregations) # 14
# 7.2 Bureau categorical features. Derived from:
# 'CREDIT_ACTIVE', 'CREDIT_CURRENCY', 'CREDIT_TYPE',
# Total:
cat_aggregations = {}
bureau_cat # bureau_cat are newly created dummy columns
# but all are numerical columns
# 7.2.1
len(bureau_cat) # 26
# 7.2.2 For all these new dummy columns in bureau, we will
# take mean
for cat in bureau_cat: cat_aggregations[cat] = ['mean']
cat_aggregations
len(cat_aggregations) # 26
# 7.3.1 In addition, we have in bureau. columns that merged
# from 'bb' ie bb_cat
# So here is our full list
bb_cat
len(bb_cat) # 9
# 7.3.2
for cat in bb_cat: cat_aggregations[cat + "_MEAN"] = ['mean']
cat_aggregations
len(cat_aggregations) # 26 + 9 = 35
# 7.4 Have a look at bureau columns again
# Just to compare above results with what
# already exists
bureau.columns # 51
len(bureau.columns) # 35 (dummy) + 14 (num) + 1 (SK_ID_CURR) + 1 (DAYS_ENDDATE_FACT) = 51
# 7.5 Now that we have decided
# our aggregation strategy for each column
# (except 2), let us now aggregate:
# Note that SK_ID_CURR now becomes an index to data
grouped = bureau.groupby('SK_ID_CURR')
bureau_agg = grouped.agg({**num_aggregations, **cat_aggregations})
# 7.6
bureau_agg.head()
bureau_agg.shape # (305811, 62) (including newly created min, max etc columns)
# 7.7 Remove hierarchical index from bureau_agg
bureau_agg.columns # 62
bureau_agg.columns = pd.Index(['BURO_' + e[0] + "_" + e[1].upper() for e in bureau_agg.columns.tolist()])
# 7.8
bureau_agg.head()
# 7.8.1 Note that SK_ID_CURR is now an index to table
bureau_agg.columns # 62: Due to creation of min, max, var etc columns
# 7.9 No duplicate index
bureau_agg.index.nunique() # 305811
len(set(bureau_agg.index)) # 305811
###Output
_____no_output_____
###Markdown
More aggregation and merging. We now filter bureau on the CREDIT_ACTIVE_Active feature, which has values 1 and 0; this will create two subsets of data. First filter the data where CREDIT_ACTIVE_Active is 1, then aggregate (only) the numeric features of this filtered subset by grouping on SK_ID_CURR. Next, filter bureau on CREDIT_ACTIVE_Closed == 1 and again aggregate the subset's numeric features. Merge all of these with bureau_agg (NOT bureau). It is as if we are trying to extract the behaviour of those whose credits are active and of those whose credits are closed.
###Code
# 8.0 In which cases credit is active? Filter data
active = bureau[bureau['CREDIT_ACTIVE_Active'] == 1]
active.head()
active.shape # (630607, 51)
# 8.1 Aggregate numercial features of the filtered subset over SK_ID_CURR
active_agg = active.groupby('SK_ID_CURR').agg(num_aggregations)
# 8.1.1
active_agg.head()
active_agg.shape # (251815, 27)
# 8.1.2 Rename multi-indexed columns
active_agg.columns = pd.Index(['ACTIVE_' + e[0] + "_" + e[1].upper() for e in active_agg.columns.tolist()])
active_agg.columns
# 9.0 Difference between length of two datasets
active_agg_set = set(active_agg.index)
bureau_agg_set = set(bureau_agg.index)
len(bureau_agg_set) # 305811
len(active_agg_set) # 251815
list(bureau_agg_set - active_agg_set)[:4] # Few examples: {131074, 393220, 262149, 262153]
# 9.1 Merge bureau_agg with active_agg over 'SK_ID_CURR'
bureau_agg = bureau_agg.join(
active_agg,
how='left',
on='SK_ID_CURR'
)
# 9.2
bureau_agg.shape # (305811, 89)
# 9.3 Obviouly some rows will hold NaN values for merged columns
bureau_agg.loc[[131074,393220,262149, 262153]]
# 9.4 Release memory
del active, active_agg
gc.collect()
# 10.0 Same steps for the CREDIT_ACTIVE_Closed =1 cases
# Bureau: Closed credits - using only numerical aggregations
closed = bureau[bureau['CREDIT_ACTIVE_Closed'] == 1]
closed_agg = closed.groupby('SK_ID_CURR').agg(num_aggregations)
closed_agg.columns = pd.Index(['CLOSED_' + e[0] + "_" + e[1].upper() for e in closed_agg.columns.tolist()])
bureau_agg = bureau_agg.join(closed_agg, how='left', on='SK_ID_CURR')
# 10.1
bureau_agg.shape # (305811, 116)
# 10.2
del closed, closed_agg, bureau
gc.collect()
# 10.3 SK_ID_CURR is index. Index is also saved by-default.
bureau_agg.to_csv("processed_bureau_agg.csv.zip", compression = "zip")
##################
###Output
_____no_output_____ |
DeepForge.ipynb | ###Markdown
Run the two sections in different notebooks. Section 1 (Cross-Validation Training)
###Code
import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt
sample = '../input/signature-forgery-detection/Signatures_2/Train_Set/001/Real/001_02.PNG'
sample = tf.io.read_file(sample)
sample = tf.image.decode_jpeg(sample)
sample.shape
shape = (400,800,3)
def load_img(path):
image_file = tf.io.read_file(path)
image = tf.image.decode_jpeg(image_file)
image = np.resize(image,shape)
return image
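# Note (added comment): np.resize above repeats/truncates the flattened pixel buffer
# to reach `shape`; it does not interpolate like an image resize. A true resampling
# alternative (an assumption, not what this notebook uses) would be e.g.:
#   image = tf.image.resize(image, (shape[0], shape[1])).numpy()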
!find . -name "*.DS_Store" -type f -delete
train_path = '../input/signature-forgery-detection/Signatures_2/Train_Set/'
dataset = []
targets = []
real_count =0
forged_count = 0
persons = os.listdir(train_path)
for person in persons:
path = os.path.join(train_path,person)
real = os.path.join(path,'Real/')
real_files = os.listdir(real)
fraud = os.path.join(path,'Forged/')
fraud_files = os.listdir(fraud)
for j in range(len(real_files)):
for k in range(len(real_files)):
if j==k:
continue
real_count +=1
img1 = load_img(os.path.join(real,real_files[j]))
img2 = load_img(os.path.join(real,real_files[k]))
dataset.append([img1,img2])
targets.append(0.)
for j in range(len(real_files)):
for k in range(len(fraud_files)):
if j==k:
continue
forged_count+=1
img1 = load_img(os.path.join(real,real_files[j]))
img2 = load_img(os.path.join(fraud,fraud_files[k]))
dataset.append([img1,img2])
targets.append(1.)
print(real_count)
print(forged_count)
def Siamese_Model(input_shape=(400,800,3)):
input_one = tf.keras.layers.Input(shape=input_shape)
input_two = tf.keras.layers.Input(shape=input_shape)
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(32,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Conv2D(64,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Conv2D(64,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dropout(0.3))
cnn.add(tf.keras.layers.Dense(128))
distance_layer = tf.keras.layers.Lambda(lambda tensor: abs(tensor[0]-tensor[1]))
out1 = cnn(input_one)
out2 = cnn(input_two)
l1_distance = distance_layer([out1,out2])
final_out = tf.keras.layers.Dense(1,activation='sigmoid')(l1_distance)
model = tf.keras.Model([input_one,input_two],final_out)
return model
dataset = np.array(dataset)
targets = np.array(targets)
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=True,random_state=32)
cvscores = []
for train,val in kf.split(dataset,targets):
train = np.array(train)
val = np.array(val)
model = Siamese_Model()
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
model.fit([dataset[train,0],dataset[train,1]],targets[train],epochs =15,verbose=0)
score = model.evaluate([dataset[val,0],dataset[val,1]],targets[val],verbose=0)
print(score)
cvscores.append(score)
tf.keras.backend.clear_session()
cvscores = np.array(cvscores)
errors = cvscores[:,1]
mean = np.mean(errors)
median = np.median(errors)
std = np.std(errors)
print('Mean Median Std')
print(mean,median,std)
tf.keras.backend.clear_session()
###Output
_____no_output_____
###Markdown
Section-2 (Normal Training)
###Code
import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt
sample = '../input/signature-forgery-detection/Signatures_2/Train_Set/001/Real/001_02.PNG'
sample = tf.io.read_file(sample)
sample = tf.image.decode_jpeg(sample)
sample.shape
shape = (400,800,3)
def load_img(path):
image_file = tf.io.read_file(path)
image = tf.image.decode_jpeg(image_file)
image = np.resize(image,shape)
return image
!find . -name "*.DS_Store" -type f -delete
pairs = 20 # KEEP 10 FOR CROSS VALIDATION
train_path = '../input/signature-forgery-detection/Signatures_2/Train_Set'
dataset = []
targets = []
persons = os.listdir(train_path)
for person in persons:
path = os.path.join(train_path,person)
real = os.path.join(path,'Real/')
real_files = os.listdir(real)
fraud = os.path.join(path,'Forged/')
fraud_files = os.listdir(fraud)
for j in range(pairs//2):
ind1 = np.random.randint(0,len(real_files)-1)
ind2 = np.random.randint(0,len(real_files)-1)
ind3 = np.random.randint(0,len(fraud_files)-1)
img1 = load_img(os.path.join(real,real_files[ind1]))
img2 = load_img(os.path.join(real,real_files[ind2]))
img3 = load_img(os.path.join(fraud,fraud_files[ind3]))
dataset.append([img1,img2])
dataset.append([img1,img3])
dataset.append([img2,img3])
targets.append(0.)
targets.append(1.)
targets.append(1.)
def Siamese_Model(input_shape=(400,800,3)):
input_one = tf.keras.layers.Input(shape=input_shape)
input_two = tf.keras.layers.Input(shape=input_shape)
cnn = tf.keras.models.Sequential()
cnn.add(tf.keras.layers.Conv2D(32,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Conv2D(64,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Conv2D(64,(3,3),activation='relu',padding='same'))
cnn.add(tf.keras.layers.AveragePooling2D((2,2)))
cnn.add(tf.keras.layers.Flatten())
cnn.add(tf.keras.layers.Dropout(0.3))
cnn.add(tf.keras.layers.Dense(128))
distance_layer = tf.keras.layers.Lambda(lambda tensor: abs(tensor[0]-tensor[1]))
out1 = cnn(input_one)
out2 = cnn(input_two)
l1_distance = distance_layer([out1,out2])
final_out = tf.keras.layers.Dense(1,activation='sigmoid')(l1_distance)
model = tf.keras.Model([input_one,input_two],final_out)
return model
dataset = np.array(dataset)
targets = np.array(targets)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2,
patience=5, min_lr=0.001)
model = Siamese_Model()
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit([dataset[:,0],dataset[:,1]],targets,epochs=30,callbacks=[reduce_lr],validation_split=0.1)
history_dict = history.history
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
plt.plot(acc)
plt.plot(val_acc)
plt.plot(loss)
plt.plot(val_loss)
###Output
_____no_output_____
###Markdown
Testing Manually
###Code
sample1 = load_img('../input/signature-forgery-detection/Signatures_2/Test_Set/006/Real/11_054.png') # True Signature of a person
sample2 = load_img('../input/signature-forgery-detection/Signatures_2/Test_Set/006/Forged/01_0102054.PNG') # Forged version of the signature
sample3 = load_img('../input/signature-forgery-detection/Signatures_2/Test_Set/006/Forged/01_0207054.PNG') # Another Forged Version of the Signature
sample1 = np.expand_dims(sample1,0)
sample2 = np.expand_dims(sample2,0)
sample3 = np.expand_dims(sample3,0)
ans = model.predict([sample1,sample3])
ans = np.around(ans,decimals=2)
if ans < 0.5:
print("Genuine Signature")
else:
print("Fruad Signature")
###Output
_____no_output_____
###Markdown
Test Accuracy
###Code
train_path = '../input/signature-forgery-detection/Signatures_2/Test_Set'
dataset = []
targets = []
persons = os.listdir(train_path)
for person in persons:
path = os.path.join(train_path,person)
real = os.path.join(path,'Real/')
real_files = os.listdir(real)
fraud = os.path.join(path,'Forged/')
fraud_files = os.listdir(fraud)
for j in range(pairs//2):
ind1 = np.random.randint(0,len(real_files)-1)
ind2 = np.random.randint(0,len(real_files)-1)
ind3 = np.random.randint(0,len(fraud_files)-1)
img1 = load_img(os.path.join(real,real_files[ind1]))
img2 = load_img(os.path.join(real,real_files[ind2]))
img3 = load_img(os.path.join(fraud,fraud_files[ind3]))
dataset.append([img1,img2])
dataset.append([img1,img3])
dataset.append([img2,img3])
targets.append(0.)
targets.append(1.)
targets.append(1.)
dataset = np.array(dataset)
targets = np.array(targets)
score = model.evaluate([dataset[:,0],dataset[:,1]],targets)
print("Test Accuracy ",end='')
print(score[1])
###Output
_____no_output_____ |
frontend/assets/backup/dev/frontend.ipynb | ###Markdown
Recording Screen
###Code
%run -i images2
###Output
./IMG/gameplay_2021_10_23_16_31_09_977.jpg
./IMG/gameplay_2021_10_23_16_31_11_996.jpg
./IMG/gameplay_2021_10_23_16_31_14_000.jpg
./IMG/gameplay_2021_10_23_16_31_16_013.jpg
./IMG/gameplay_2021_10_23_16_31_18_038.jpg
./IMG/gameplay_2021_10_23_16_31_20_082.jpg
./IMG/gameplay_2021_10_23_16_31_22_138.jpg
./IMG/gameplay_2021_10_23_16_31_24_736.jpg
./IMG/gameplay_2021_10_23_16_31_26_781.jpg
./IMG/gameplay_2021_10_23_16_31_28_828.jpg
./IMG/gameplay_2021_10_23_16_31_30_838.jpg
./IMG/gameplay_2021_10_23_16_31_32_846.jpg
./IMG/gameplay_2021_10_23_16_31_34_855.jpg
./IMG/gameplay_2021_10_23_16_31_36_902.jpg
./IMG/gameplay_2021_10_23_16_31_38_907.jpg
./IMG/gameplay_2021_10_23_16_31_40_913.jpg
./IMG/gameplay_2021_10_23_16_31_42_953.jpg
./IMG/gameplay_2021_10_23_16_31_44_957.jpg
./IMG/gameplay_2021_10_23_16_31_46_958.jpg
./IMG/gameplay_2021_10_23_16_31_49_013.jpg
###Markdown
Recording Keyboard. The first step is to learn how to record the keyboard.
###Code
# Testing the record
%run -i keyboard
#!python keyboard.py
#with open('log.txt') as f:
# lines = f.readlines()
#print(lines)
#!python video.py
###Output
_____no_output_____ |
nbs/00_connectors.gcp.ipynb | ###Markdown
Connectors for GCP> API details.
###Code
#hide
from nbdev.showdoc import *
###Output
_____no_output_____
###Markdown
GCS
###Code
# exports
import json
from io import BytesIO
import pandas as pd
from google.cloud import storage
class GCSConnector:
"""
Object: GCSConnector(Object)
Purpose: Connector to the GCS account
"""
def __init__(self, credentials, bucketname):
"""
Initialize Google Cloud Storage Connector to bucket
:param credentials: (str) JSON credentials filename
:param bucketname: (str) bucket name
"""
self._CREDENTIALS = credentials
self._BUCKETNAME = bucketname
self._gcsclient = storage.Client.from_service_account_json(self._CREDENTIALS)
self._bucket = self._gcsclient.get_bucket(self._BUCKETNAME)
def get_file(self, filename):
"""
Get file content from GCS
:param filename:
:return: (BytesIO) GCS File as byte
"""
blob = storage.Blob(filename, self._bucket)
content = blob.download_as_string()
return BytesIO(content)
def send_json(self, json_file, filename):
"""
:param json_file: (dict or list) JSON-serializable object to upload
:param filename: (str) destination blob name in the bucket
:return: None
"""
self._bucket.blob(filename).upload_from_string(json.dumps(json_file, ensure_ascii=False))
def send_dataframe(self, df, filename, **kwargs):
"""
:param df: (pandas.DataFrame) dataframe to upload as a CSV blob
:param filename: (str) destination blob name in the bucket
:param kwargs: keyword arguments forwarded to DataFrame.to_csv
:return: None
"""
self._bucket.blob(filename).upload_from_string(
df.to_csv(**kwargs), content_type="application/octet-stream")
def open_csv_as_dataframe(self, filename, **kwargs):
"""
:param filename:
:param kwargs:
:return:
"""
return pd.read_csv(self.get_file(filename=filename), **kwargs)
def open_json_as_dataframe(self, filename, **kwargs):
"""
:param filename:
:param kwargs:
:return:
"""
return pd.read_json(self.get_file(filename=filename), **kwargs)
def open_excel_as_dataframe(self, filename, **kwargs):
"""
:param filename:
:param kwargs:
:return:
"""
return pd.read_excel(self.get_file(filename=filename), **kwargs)
def file_exists(self, filename):
"""
Check if 'filename' file exists within bucket
:param filename:
:return: (Bool)
"""
return storage.Blob(filename, self._bucket).exists(self._gcsclient)
def list_files(self, prefix, delimiter=None):
return [blob.name for blob in self._bucket.list_blobs(prefix=prefix, delimiter=delimiter)]
show_doc(GCSConnector)
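# Illustrative usage (commented out: it needs a real key file and bucket; all names below are placeholders):
# gcs = GCSConnector(credentials="service-account.json", bucketname="my-bucket")
# gcs.file_exists("exports/data.csv")
# df = gcs.open_csv_as_dataframe("exports/data.csv")
# gcs.send_dataframe(df, "exports/data_copy.csv", index=False)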
###Output
_____no_output_____
###Markdown
Big Query
###Code
# exports
import pandas_gbq
from google.cloud import bigquery
from google.oauth2 import service_account
class BQConnector:
"""
Object: BQConnector(Object)
Purpose: Connector to the Big Query account
"""
def __init__(self, credentials, project_id):
self.project_id = project_id
# Enable the Google Drive API
self.credentials = service_account.Credentials.from_service_account_file(
credentials
)
self.credentials = self.credentials.with_scopes(
[
'https://www.googleapis.com/auth/drive',
'https://www.googleapis.com/auth/cloud-platform'
]
)
self._client = bigquery.Client(credentials=self.credentials)
self._credentials_gbq = service_account.Credentials.from_service_account_file(credentials)
def read_df(self, bq_sql_query):
return self._client.query(bq_sql_query).to_dataframe()
def write_df(self, df_to_write, dataset, table, if_exists='replace'):
pandas_gbq.to_gbq(
df_to_write
, '{}.{}'.format(dataset, table)
, project_id=self.project_id
, if_exists=if_exists
, credentials=self._credentials_gbq
)
def run_job(self, sql_query):
self._client.query(sql_query).result()
show_doc(BQConnector)
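# Illustrative usage (commented out: it needs a real key file, project and dataset; all names below are placeholders):
# bq = BQConnector(credentials="service-account.json", project_id="my-project")
# df = bq.read_df("SELECT 1 AS x")
# bq.write_df(df, dataset="scratch", table="example", if_exists="replace")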
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_connectors.gcp.ipynb.
Converted 01_nlp.fasttext.ipynb.
Converted 02_forecasting.dataprep.ipynb.
Converted 03_models.catboost.ipynb.
Converted index.ipynb.
|
nytac/corpora_stats/nytac_word_frequency.ipynb | ###Markdown
Get Word Frequency and Statistics on the New York Times Annotated Corpus
###Code
%load_ext autoreload
%autoreload 2
from collections import Counter
import csv
from pathlib import Path
import pickle
import string
import pandas as pd
import spacy
from tqdm import tqdm
import lxml.etree as ET
from annorxiver_modules.document_helper import dump_article_text, get_word_stats  # get_word_stats is assumed to come from the same helper module (it is called below)
lemma_model = spacy.load("en_core_web_sm")
lemma_model.max_length = 9000000
###Output
_____no_output_____
###Markdown
Get the Listing of NYTAC documents
###Code
document_gen = list(
Path("../nyt_corpus/extracted_data")
.rglob("*.xml")
)
print(len(document_gen))
###Output
1855658
###Markdown
Parse the Corpus
###Code
document_list = [
f"{doc.stem}.xml"
for doc in document_gen
]
sentence_length = get_word_stats(
document_list=document_list,
document_folder="../nyt_corpus/extracted_data",
tag_path="//body/body.head/headline/hl1|//body/body.content/block/p",
output_folder="output/",
)
pickle.dump(
sentence_length,
open("nytac_sentence_length.pkl", "wb")
)
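# Minimal single-document word-count sketch (illustrative; it assumes the same XPath as
# above and plain whitespace tokenisation - the full statistics come from get_word_stats):
# tree = ET.parse(str(document_gen[0]))
# text = " ".join(node.text or "" for node in tree.xpath(
#     "//body/body.head/headline/hl1|//body/body.content/block/p"))
# word_counts = Counter(tok.lower().strip(string.punctuation) for tok in text.split())
# word_counts.most_common(10)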
###Output
_____no_output_____ |
Assignment_for_newcomer/2_Python_library_basic_quiz.ipynb | ###Markdown
Cohort 3 Assignment - Library Basics
- Solve the problems below using the basic syntax you have learned so far.
- Write the source code **as simply as possible**.
- Save your work as 'name_answer.zip' and submit it to the assignment folder. Be sure to **complete** the entire code without errors, using search or questions to your mentor where needed.
> Problem 0. Import the Numpy, Pandas and Matplotlib packages.
> Problem 1. Create a 4x2 integer numpy array and print the array together with the two properties below. - All elements of the array must have the int16 type. - 1) the array's shape - 2) the array's dimension **Expected output** : [[12 32] [32 0] [52 462] [ 0 0]] (4, 2) 2
> Problem 2. Create and print a **5X2** integer numpy array over the range 100 ~ 200 in which consecutive elements differ by 10. **Expected output** : [[100 110] [120 130] [140 150] [160 170] [180 190]]
> Problem 3. Print the elements in the second column of every row of the numpy array provided below. **Expected output** : [[11 22 33] [44 55 66] [77 88 99]] [22 55 88]
###Code
arr = np.array([[11 ,22, 33], [44, 55, 66], [77, 88, 99]])
print(arr)
print()
# --- start code ---
# --- end code ---
###Output
_____no_output_____
###Markdown
> Problem 4. From the numpy array provided below, print the elements located in the odd-numbered rows and even-numbered columns. **Expected output :** [[ 3 6 9 12] [15 18 21 24] [27 30 33 36] [39 42 45 48] [51 54 57 60]] [[ 6 12] [30 36] [54 60]]
###Code
arr = np.array([[3 ,6, 9, 12], [15 ,18, 21, 24], [27 ,30, 33, 36], [39 ,42, 45, 48], [51 ,54, 57, 60]])
print(arr)
print()
# --- start code ---
# --- end code ---
###Output
_____no_output_____
###Markdown
> Problem 5. Compute and print the element-wise sum of the two given numpy arrays, then square the summed array and print it. **Expected output :** Sum : [[20 39 33] [25 25 28]] Square : [[ 400 1521 1089] [ 625 625 784]]
###Code
arrone = np.array([[5, 6, 9], [21 ,18, 27]])
arrtwo = np.array([[15 ,33, 24], [4 ,7, 1]])
# --- start code ---
# --- end code ---
###Output
_____no_output_____
###Markdown
> Problem 6. Create an 8X3 integer numpy array over the range 10 to 34 in which consecutive elements differ by 1 and print it, then split the array into 4 equally sized row-wise numpy arrays and print them. **Expected output :** [[10 11 12] [13 14 15] [16 17 18] [19 20 21] [22 23 24] [25 26 27] [28 29 30] [31 32 33]] [array([[10, 11, 12],[13, 14, 15]]), array([[16, 17, 18],[19, 20, 21]]), array([[22, 23, 24],[25, 26, 27]]), array([[28, 29, 30],[31, 32, 33]])] > Problem 7. For the given numpy array, print the maximum and minimum values along the columns and along the rows. **Expected output :** [[34 43 73] [82 22 12] [53 94 66]] [34 22 12] [34 12 53] [82 94 73] [73 82 94]
###Code
arr = np.array([[34,43,73],[82,22,12],[53,94,66]])
print(arr)
print()
# --- start code ---
# --- end code ---
###Output
_____no_output_____
###Markdown
> Problem 8. For the given arr, delete the second column and insert new_arr in its place, then print the result. **Expected output :** [[34 43 73] [82 22 12] [53 94 66]] [[34 10 73] [82 10 12] [53 10 66]]
###Code
arr = np.array([[34,43,73],[82,22,12],[53,94,66]])
new_arr = np.array([[10,10,10]])
print (arr)
print()
# --- start code ---
# --- end code ---
###Output
_____no_output_____
###Markdown
> Problem 9. Load the 'company_sales_data.csv' file and visualize 'Total profit' against 'Month number' with a line plot. The 'Total profit' data is provided for every month. Apply the elements below to the line plot and display it. Title = Company profit per month, X label = Month Number, Y label = Total profit **Expected output :** [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex1.jpg) > Problem 10. Load the 'company_sales_data.csv' file, read the data for every product and visualize them with a multiline plot. Display the number of units sold per month for each product, i.e. each product should have its own plot line. **Expected output** : [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex2.jpg) > Problem 11. From the same csv file, load the monthly toothpaste sales data and visualize it with a scatter plot. The plot must show gridlines - gridline style: "--" **Expected output :** [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex3.jpg)
###Code
###Output
_____no_output_____
###Markdown
> Problem 12. From the same csv file, load the monthly bathing soap sales data and visualize it with a bar chart. Save the plot as a jpg file on your computer (hint: savefig) and, when you submit the assignment, zip it together with the ipynb file. **Expected output :** [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex4.jpg) > Problem 13. Load "iris.csv" into a numpy array of shape (150,5). Then **normalize** the 4 attributes (sepal.length, sepal.width, petal.length, petal.width) and print only the first five rows. (That is, each attribute should end up with mean 0 and standard deviation 1.) **Expected output :** array([[-0.90068117, 1.01900435, -1.34022653, -1.3154443 ], [-1.14301691, -0.13197948, -1.34022653, -1.3154443 ], [-1.38535265, 0.32841405, -1.39706395, -1.3154443 ], [-1.50652052, 0.09821729, -1.2833891 , -1.3154443 ], [-1.02184904, 1.24920112, -1.34022653, -1.3154443 ]]) > Problem 14. Visualize the 4 attributes of the Iris data with a boxplot. **Expected output :** [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex5.jpg) > Problem 15. Create 1000 samples whose x values have mean 5 and standard deviation 3 and whose y values have mean 3 and standard deviation 2, and visualize them with a scatter plot. (hint: use np.random.normal()) Keep the axis ratio (the tick lengths of the x and y axes) equal. (hint: use plt.axis()) **Expected output** : [[click]](https://github.com/OH-Seoyoung/Data_Analysis_Club_Assignment/blob/master/Assignment_for_newcomer/examples/ex6.jpg) > Problem 16. The sigmoid function is known as the logistic function. It is a nonlinear function used in deep learning as well as machine learning. Define a function that returns the sigmoid value of a real number x. The sigmoid formula is shown below. **Expected output** : 0.6456563062257954
###Code
# don't touch here
from IPython.display import display, Math, Latex
display(Math(r'sigmoid(x) = \frac{1}{1+e^{-x}}'))
import math # use the math library
def sigmoid(x):
# --- start code ------
# --- end code --------
return s
print(sigmoid(0.6))
###Output
_____no_output_____
###Markdown
> Problem 17. Take any jpg image file, load it as a numpy array, and print the image's shape and data type.
###Code
from PIL import Image
img = np.array(Image.open('your_image_name.jpg'))  # put the filename of your own jpg image here
# --- start code ------
# --- end code --------
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
> Problem 18. Using the same method as above, load the same image in grayscale, and print its shape and data type. In a comment, explain the difference between the colour image and the grayscale image that you can tell from the printed shapes.
###Code
# --- start code ------
# --- end code --------
plt.imshow(img_gray, cmap = 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
> Problem 19. Using the **grayscale image** above, compute the three expressions below and print the maximum and minimum values of each.
###Code
# don't touch here
from IPython.display import display, Math, Latex
print('X : grayscale image numpy array')
display(Math('X1 = 255 - X'))
display(Math('X2 = (100 / 255) * X + 100 '))
display(Math('X3 = 255 * (X / 255) ** 2 '))
# --- start code ------
###Output
_____no_output_____ |
stats/Java sorting.ipynb | ###Markdown
Initial code to strip the fields with Regex
```
import re
import pandas as pd

data = pd.read_csv("java_sorting_24_7_17.txt", sep="|")

def filter_data(data):
    data.columns = [re.sub(r'\s+(\S+)\s+', r'\1', x) for x in data.columns]
    for i in range(1, len(data.columns)):
        try:
            data.iloc[:, i] = data.iloc[:, i].apply(lambda x: re.sub(r'\s+(\S+)\s+', r'\1', x))
        except Exception as e:
            print(e)
    data.loc[:, 'shuffle'] = data.loc[:, 'shuffle'].apply(lambda x: re.sub(r'\/(\d+)', r'\1', x))
    return data

data = filter_data(data)
```
###Code
# Using strip to filter the values in the txt
import pandas as pd
import numpy as np
def read_stats(data_file):
data = pd.read_csv(data_file, sep="|")
data.columns = [ x.strip() for x in data.columns]
# Filter integer indexes
str_idxs = [idx for idx,dtype in zip(range(0,len(data.dtypes)), data.dtypes) if dtype != 'int64' ]
# Strip fields
for i in str_idxs:
key = data.columns[i]
if data[key].dtype == object:   # string columns are read in as object dtype
data.loc[:,key] = [ x.strip() if isinstance(x, str) else x for x in data.loc[:, key]]
return data
data = read_stats("java_sorting_127.0.1.1_Di_1._Aug_07:39:03_UTC_2017.csv")
# data.to_csv("java_sorting_127.0.1.1_Di_1._Aug_07:39:03_UTC_2017.csv")
[x for x in zip(range(0, len(data.columns)),data.columns)]
import plotly
import plotly.plotly as py
import plotly.figure_factory as ff
from plotly.graph_objs import *
#plotly.offline.init_notebook_mode()
def filter_by(data, name, value):
data_length = len(data)
return [idx for idx in range(0, data_length) if data.loc[idx,name] == value]
# using ~/.plotly/.credentials
# plotly.tools.set_credentials_file(username="", api_key="")
algorithms = set(data.loc[:, 'name'])
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data, 'name', alg)
X = data.loc[idxs, 'elements']
Y = data.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
The same stats as before but with 9 million data points. The **merge sort** algorithm we developed appears to scale slightly sub-linearly in this run; we could not observe the worst-case O(n log n) behaviour [see](https://en.wikipedia.org/wiki/Merge_sort). The worst case of our **merge sort** (single-threaded) is better than the worst case of the Java platform's Arrays.sort; however, the stats are not independent because the runs were not isolated. We loop through all sorting algorithms, so the garbage collection of the previous algorithm might affect the performance of the next one. The garbage collection of **merge sort** might change the performance of **Arrays.sort**.
###Code
data2 = read_stats("java_sorting_127.0.1.1_Fr_4._Aug_23:59:33_UTC_2017.txt")
algorithms = set(data2.loc[:, 'name'])
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
alg = algorithms.pop()
idxs = filter_by(data2, 'name', alg)
X = data2.loc[idxs, 'elements']
Y = data2.loc[idxs, 'duration_ms']
plot_data = [Bar(x = X, y = Y, name=alg)]
layout = Layout(title= alg + ' performance (java) ',
xaxis=dict(title='Elements'),
yaxis=dict(title='Time'))
fig = Figure(data=plot_data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____
###Markdown
Better visualization
###Code
data2.loc[:,'name'] =[x.strip() for x in data2.loc[:,'name']]
algorithms = set(data2.loc[:, 'name'])
algorithms
import plotly.graph_objs as go
algorithms.remove('Linked Hashmap')
def get_bar(data, algorithm_name):
idxs = filter_by(data, 'name', algorithm_name)
X1 = data2.loc[idxs, 'elements']
Y1 = data2.loc[idxs, 'duration_ms']
return go.Bar(x=X1, y=Y1, name=algorithm_name)
plot_data = [get_bar(data2, name) for name in algorithms]
layout = go.Layout(title= 'Performance comparison',
xaxis=dict(title='Elements (32 bits / -2,147,483,648 to +2,147,483,647)'),
yaxis=dict(title='Time (ms)'),
barmode='stack')
fig = go.Figure(data=plot_data, layout=layout)
py.iplot(fig)
###Output
_____no_output_____ |
src/notebook/(2_1)StrokeColor_Skatch_A_Net_ipynb_.ipynb | ###Markdown
Connect Google Drive
###Code
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
Import
###Code
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import os
from tensorflow import keras
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPool2D
from tensorflow.keras.layers import ReLU
from tensorflow.keras.layers import Softmax
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.metrics import sparse_top_k_categorical_accuracy
from tensorflow.keras.callbacks import CSVLogger
from ast import literal_eval
###Output
_____no_output_____
###Markdown
Parameters and Work-Space Paths
###Code
# parameters
BATCH_SIZE = 200
EPOCHS = 50
STEPS_PER_EPOCH = 850
VALIDATION_STEPS = 100
EVALUATE_STEPS = 850
IMAGE_SIZE = 225
LINE_SIZE = 3
# load path
TRAIN_DATA_PATH = 'gdrive/My Drive/QW/Data/Data_10000/All_classes_10000.csv'
VALID_DATA_PATH = 'gdrive/My Drive/QW/Data/My_test_data/My_test_data.csv'
LABEL_DICT_PATH = 'gdrive/My Drive/QW/Data/labels_dict.npy'
# save path
CKPT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt'
LOSS_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/loss_plot_2_1.png'
ACC_PLOT_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/acc_plot_2_1.png'
LOG_PATH = 'gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/Log_2_1.log'
print('finish!')
###Output
finish!
###Markdown
Generator
###Code
def generate_data(data, batch_size, choose_recognized):
data = data.sample(frac = 1)
while 1:
# get columns' values named 'drawing', 'word' and 'recognized'
drawings = data["drawing"].values
drawing_recognized = data["recognized"].values
drawing_class = data["word"].values
# initialization
cnt = 0
data_X =[]
data_Y =[]
# generate batch
for i in range(len(drawings)):
if choose_recognized:
if drawing_recognized[i] == 'False': #Choose according to recognized value
continue
draw = drawings[i]
label = drawing_class[i]
stroke_vec = literal_eval(draw)
img = np.zeros([256, 256])
x = []
for j in range(len(stroke_vec)):
line = np.array(stroke_vec[j]).T
cv2.polylines(img, [line], False, 255-(13*min(j,10)), LINE_SIZE)
img = cv2.resize(img, (IMAGE_SIZE,IMAGE_SIZE), interpolation = cv2.INTER_NEAREST)
img = img[:,:, np.newaxis]
x = img
y = labels2nums_dict[label]
data_X.append(x)
data_Y.append(y)
cnt += 1
if cnt==batch_size: #generate a batch when cnt reaches batch_size
cnt = 0
yield (np.array(data_X), np.array(data_Y))
data_X = []
data_Y = []
print('finish!')
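# Typical wiring of this generator (illustrative only; train_data and valid_data are
# loaded in the "Load Data" section below):
# train_generator = generate_data(train_data, BATCH_SIZE, choose_recognized=True)
# valid_generator = generate_data(valid_data, BATCH_SIZE, choose_recognized=False)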
###Output
finish!
###Markdown
Callbacks
###Code
# define a class named LossHitory
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = {'batch':[], 'epoch':[]}
self.accuracy = {'batch':[], 'epoch':[]}
self.val_loss = {'batch':[], 'epoch':[]}
self.val_acc = {'batch':[], 'epoch':[]}
def on_batch_end(self, batch, logs={}):
self.losses['batch'].append(logs.get('loss'))
self.accuracy['batch'].append(logs.get('acc'))
self.val_loss['batch'].append(logs.get('val_loss'))
self.val_acc['batch'].append(logs.get('val_acc'))
def on_epoch_end(self, batch, logs={}):
self.losses['epoch'].append(logs.get('loss'))
self.accuracy['epoch'].append(logs.get('acc'))
self.val_loss['epoch'].append(logs.get('val_loss'))
self.val_acc['epoch'].append(logs.get('val_acc'))
def loss_plot(self, loss_type, loss_fig_save_path, acc_fig_save_path):
iters = range(len(self.losses[loss_type]))
plt.figure('acc')
plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')
plt.grid(True)
plt.xlabel(loss_type)
plt.ylabel('acc')
plt.legend(loc="upper right")
plt.savefig(acc_fig_save_path)
plt.show()
plt.figure('loss')
plt.plot(iters, self.losses[loss_type], 'g', label='train loss')
plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss')
plt.grid(True)
plt.xlabel(loss_type)
plt.ylabel('loss')
plt.legend(loc="upper right")
plt.savefig(loss_fig_save_path)
plt.show()
# create a object from LossHistory class
History = LossHistory()
print("finish!")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
CKPT_PATH,
verbose = 1,
monitor='val_acc',
mode = 'max',
save_best_only=True)
print("finish!")
ReduceLR = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=3,
min_delta=0.005, mode='max', cooldown=3, verbose=1)
csv_logger = CSVLogger(LOG_PATH, separator=',', append=True)
###Output
_____no_output_____
###Markdown
Load Data
###Code
# load train data and valid data
# labels_dict and data path
# labels convert into nums
labels_dict = np.load(LABEL_DICT_PATH)
labels2nums_dict = {v: k for k, v in enumerate(labels_dict)}
# read csv
train_data = pd.read_csv(TRAIN_DATA_PATH)
valid_data = pd.read_csv(VALID_DATA_PATH)
print('finish!')
###Output
finish!
###Markdown
Model
###Code
x_input = Input(shape=(IMAGE_SIZE,IMAGE_SIZE,1), name='Input')
x = Conv2D(64, (15,15), strides=3, padding='valid',name='Conv2D_1')(x_input)
x = BatchNormalization(name='BN_1')(x)
x = ReLU(name='ReLU_1')(x)
x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_1')(x)
x = Conv2D(128, (5,5), strides=1, padding='valid',name='Conv2D_2')(x)
x = BatchNormalization(name='BN_2')(x)
x = ReLU(name='ReLU_2')(x)
x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_2')(x)
x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_3')(x)
x = BatchNormalization(name='BN_3')(x)
x = ReLU(name='ReLU_3')(x)
x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_4')(x)
x = BatchNormalization(name='BN_4')(x)
x = ReLU(name='ReLU_4')(x)
x = Conv2D(256, (3,3), strides=1, padding='same',name='Conv2D_5')(x)
x = BatchNormalization(name='BN_5')(x)
x = ReLU(name='ReLU_5')(x)
x = MaxPool2D(pool_size=(3,3),strides=2, name='Pooling_5')(x)
x_shape = x.shape[1]
x = Conv2D(512, (int(x_shape),int(x_shape)), strides=1, padding='valid',name='Conv2D_FC_6')(x)
x = BatchNormalization(name='BN_6')(x)
x = ReLU(name='ReLU_6')(x)
x = Dropout(0.5,name='Dropout_6')(x)
x = Conv2D(512, (1,1), strides=1, padding='valid',name='Conv2D_FC_7')(x)
x = BatchNormalization(name='BN_7')(x)
x = ReLU(name='ReLU_7')(x)
x = Dropout(0.5,name='Dropout_7')(x)
x = Conv2D(340, (1,1), strides=1, padding='valid',name='Conv2D_FC_8')(x)
x = Flatten(name='Flatten')(x)
x_output = Softmax(name='Softmax')(x)
MODEL = keras.models.Model(inputs=x_input, outputs=x_output)
MODEL.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Input (InputLayer) (None, 225, 225, 1) 0
_________________________________________________________________
Conv2D_1 (Conv2D) (None, 71, 71, 64) 14464
_________________________________________________________________
BN_1 (BatchNormalization) (None, 71, 71, 64) 256
_________________________________________________________________
ReLU_1 (ReLU) (None, 71, 71, 64) 0
_________________________________________________________________
Pooling_1 (MaxPooling2D) (None, 35, 35, 64) 0
_________________________________________________________________
Conv2D_2 (Conv2D) (None, 31, 31, 128) 204928
_________________________________________________________________
BN_2 (BatchNormalization) (None, 31, 31, 128) 512
_________________________________________________________________
ReLU_2 (ReLU) (None, 31, 31, 128) 0
_________________________________________________________________
Pooling_2 (MaxPooling2D) (None, 15, 15, 128) 0
_________________________________________________________________
Conv2D_3 (Conv2D) (None, 15, 15, 256) 295168
_________________________________________________________________
BN_3 (BatchNormalization) (None, 15, 15, 256) 1024
_________________________________________________________________
ReLU_3 (ReLU) (None, 15, 15, 256) 0
_________________________________________________________________
Conv2D_4 (Conv2D) (None, 15, 15, 256) 590080
_________________________________________________________________
BN_4 (BatchNormalization) (None, 15, 15, 256) 1024
_________________________________________________________________
ReLU_4 (ReLU) (None, 15, 15, 256) 0
_________________________________________________________________
Conv2D_5 (Conv2D) (None, 15, 15, 256) 590080
_________________________________________________________________
BN_5 (BatchNormalization) (None, 15, 15, 256) 1024
_________________________________________________________________
ReLU_5 (ReLU) (None, 15, 15, 256) 0
_________________________________________________________________
Pooling_5 (MaxPooling2D) (None, 7, 7, 256) 0
_________________________________________________________________
Conv2D_FC_6 (Conv2D) (None, 1, 1, 512) 6423040
_________________________________________________________________
BN_6 (BatchNormalization) (None, 1, 1, 512) 2048
_________________________________________________________________
ReLU_6 (ReLU) (None, 1, 1, 512) 0
_________________________________________________________________
Dropout_6 (Dropout) (None, 1, 1, 512) 0
_________________________________________________________________
Conv2D_FC_7 (Conv2D) (None, 1, 1, 512) 262656
_________________________________________________________________
BN_7 (BatchNormalization) (None, 1, 1, 512) 2048
_________________________________________________________________
ReLU_7 (ReLU) (None, 1, 1, 512) 0
_________________________________________________________________
Dropout_7 (Dropout) (None, 1, 1, 512) 0
_________________________________________________________________
Conv2D_FC_8 (Conv2D) (None, 1, 1, 340) 174420
_________________________________________________________________
Flatten (Flatten) (None, 340) 0
_________________________________________________________________
Softmax (Softmax) (None, 340) 0
=================================================================
Total params: 8,562,772
Trainable params: 8,558,804
Non-trainable params: 3,968
_________________________________________________________________
###Markdown
TPU Compile
###Code
model = MODEL
model.compile(loss='sparse_categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('finish')
###Output
finish
###Markdown
Train
###Code
print('start training')
# callbacks = [History, cp_callback]
history = model.fit_generator(generate_data(train_data, BATCH_SIZE, True),
steps_per_epoch = STEPS_PER_EPOCH,
epochs = EPOCHS,
validation_data = generate_data(valid_data, BATCH_SIZE, False) ,
validation_steps = VALIDATION_STEPS,
verbose = 1,
initial_epoch = 0,
callbacks = [History,cp_callback,ReduceLR,csv_logger]
)
print("finish training")
History.loss_plot('epoch', LOSS_PLOT_PATH, ACC_PLOT_PATH)
print('finish!')
###Output
start training
Epoch 1/50
849/850 [============================>.] - ETA: 0s - loss: 3.8941 - acc: 0.1969
Epoch 00001: val_acc improved from -inf to 0.39465, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 387s 456ms/step - loss: 3.8926 - acc: 0.1971 - val_loss: 2.6247 - val_acc: 0.3946
Epoch 2/50
849/850 [============================>.] - ETA: 0s - loss: 2.6675 - acc: 0.3848
Epoch 00002: val_acc improved from 0.39465 to 0.47900, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 444ms/step - loss: 2.6675 - acc: 0.3848 - val_loss: 2.1809 - val_acc: 0.4790
Epoch 3/50
849/850 [============================>.] - ETA: 0s - loss: 2.3318 - acc: 0.4522
Epoch 00003: val_acc improved from 0.47900 to 0.54445, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 2.3316 - acc: 0.4522 - val_loss: 1.8988 - val_acc: 0.5444
Epoch 4/50
849/850 [============================>.] - ETA: 0s - loss: 2.1677 - acc: 0.4867
Epoch 00004: val_acc improved from 0.54445 to 0.56575, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 2.1676 - acc: 0.4868 - val_loss: 1.8030 - val_acc: 0.5657
Epoch 5/50
849/850 [============================>.] - ETA: 0s - loss: 2.0528 - acc: 0.5103
Epoch 00005: val_acc improved from 0.56575 to 0.58080, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 443ms/step - loss: 2.0527 - acc: 0.5104 - val_loss: 1.7436 - val_acc: 0.5808
Epoch 6/50
849/850 [============================>.] - ETA: 0s - loss: 1.9624 - acc: 0.5287
Epoch 00006: val_acc improved from 0.58080 to 0.60580, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 443ms/step - loss: 1.9621 - acc: 0.5287 - val_loss: 1.6372 - val_acc: 0.6058
Epoch 7/50
849/850 [============================>.] - ETA: 0s - loss: 1.8950 - acc: 0.5424
Epoch 00007: val_acc improved from 0.60580 to 0.62155, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 443ms/step - loss: 1.8948 - acc: 0.5425 - val_loss: 1.5491 - val_acc: 0.6216
Epoch 8/50
849/850 [============================>.] - ETA: 0s - loss: 1.8435 - acc: 0.5566
Epoch 00008: val_acc improved from 0.62155 to 0.62295, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 443ms/step - loss: 1.8434 - acc: 0.5567 - val_loss: 1.5613 - val_acc: 0.6229
Epoch 9/50
849/850 [============================>.] - ETA: 0s - loss: 1.8053 - acc: 0.5638
Epoch 00009: val_acc improved from 0.62295 to 0.63075, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 376s 443ms/step - loss: 1.8055 - acc: 0.5638 - val_loss: 1.5016 - val_acc: 0.6308
Epoch 10/50
849/850 [============================>.] - ETA: 0s - loss: 1.7765 - acc: 0.5722
Epoch 00010: val_acc improved from 0.63075 to 0.64190, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 444ms/step - loss: 1.7762 - acc: 0.5723 - val_loss: 1.4643 - val_acc: 0.6419
Epoch 11/50
849/850 [============================>.] - ETA: 0s - loss: 1.7327 - acc: 0.5814
Epoch 00011: val_acc improved from 0.64190 to 0.64475, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 379s 446ms/step - loss: 1.7328 - acc: 0.5813 - val_loss: 1.4487 - val_acc: 0.6448
Epoch 12/50
849/850 [============================>.] - ETA: 0s - loss: 1.7013 - acc: 0.5872
Epoch 00012: val_acc improved from 0.64475 to 0.66140, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 376s 443ms/step - loss: 1.7013 - acc: 0.5872 - val_loss: 1.3741 - val_acc: 0.6614
Epoch 13/50
849/850 [============================>.] - ETA: 0s - loss: 1.6855 - acc: 0.5926
Epoch 00013: val_acc did not improve from 0.66140
850/850 [==============================] - 377s 443ms/step - loss: 1.6855 - acc: 0.5926 - val_loss: 1.3968 - val_acc: 0.6534
Epoch 14/50
849/850 [============================>.] - ETA: 0s - loss: 1.6563 - acc: 0.5971
Epoch 00014: val_acc improved from 0.66140 to 0.66705, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.6563 - acc: 0.5971 - val_loss: 1.3629 - val_acc: 0.6671
Epoch 15/50
849/850 [============================>.] - ETA: 0s - loss: 1.6341 - acc: 0.6046
Epoch 00015: val_acc improved from 0.66705 to 0.66855, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.6343 - acc: 0.6046 - val_loss: 1.3530 - val_acc: 0.6686
Epoch 16/50
849/850 [============================>.] - ETA: 0s - loss: 1.6199 - acc: 0.6069
Epoch 00016: val_acc did not improve from 0.66855
850/850 [==============================] - 376s 442ms/step - loss: 1.6202 - acc: 0.6069 - val_loss: 1.3713 - val_acc: 0.6575
Epoch 17/50
849/850 [============================>.] - ETA: 0s - loss: 1.6173 - acc: 0.6075
Epoch 00017: val_acc improved from 0.66855 to 0.67000, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
Epoch 00017: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
850/850 [==============================] - 379s 446ms/step - loss: 1.6176 - acc: 0.6074 - val_loss: 1.3229 - val_acc: 0.6700
Epoch 18/50
849/850 [============================>.] - ETA: 0s - loss: 1.5386 - acc: 0.6271
Epoch 00018: val_acc improved from 0.67000 to 0.69380, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 379s 445ms/step - loss: 1.5386 - acc: 0.6272 - val_loss: 1.2598 - val_acc: 0.6938
Epoch 19/50
849/850 [============================>.] - ETA: 0s - loss: 1.5188 - acc: 0.6306
Epoch 00019: val_acc improved from 0.69380 to 0.69660, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.5188 - acc: 0.6306 - val_loss: 1.2129 - val_acc: 0.6966
Epoch 20/50
849/850 [============================>.] - ETA: 0s - loss: 1.4950 - acc: 0.6357
Epoch 00020: val_acc did not improve from 0.69660
850/850 [==============================] - 375s 441ms/step - loss: 1.4949 - acc: 0.6358 - val_loss: 1.2308 - val_acc: 0.6928
Epoch 21/50
849/850 [============================>.] - ETA: 0s - loss: 1.4978 - acc: 0.6373
Epoch 00021: val_acc improved from 0.69660 to 0.69905, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 443ms/step - loss: 1.4976 - acc: 0.6373 - val_loss: 1.2007 - val_acc: 0.6990
Epoch 22/50
849/850 [============================>.] - ETA: 0s - loss: 1.4804 - acc: 0.6399
Epoch 00022: val_acc improved from 0.69905 to 0.70405, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 376s 443ms/step - loss: 1.4806 - acc: 0.6399 - val_loss: 1.1987 - val_acc: 0.7040
Epoch 23/50
849/850 [============================>.] - ETA: 0s - loss: 1.4638 - acc: 0.6436
Epoch 00023: val_acc did not improve from 0.70405
850/850 [==============================] - 374s 440ms/step - loss: 1.4637 - acc: 0.6436 - val_loss: 1.2002 - val_acc: 0.7013
Epoch 24/50
849/850 [============================>.] - ETA: 0s - loss: 1.4643 - acc: 0.6436
Epoch 00024: val_acc did not improve from 0.70405
Epoch 00024: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
850/850 [==============================] - 375s 441ms/step - loss: 1.4643 - acc: 0.6436 - val_loss: 1.2037 - val_acc: 0.7014
Epoch 25/50
849/850 [============================>.] - ETA: 0s - loss: 1.4363 - acc: 0.6510
Epoch 00025: val_acc improved from 0.70405 to 0.71090, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 444ms/step - loss: 1.4364 - acc: 0.6510 - val_loss: 1.1670 - val_acc: 0.7109
Epoch 26/50
849/850 [============================>.] - ETA: 0s - loss: 1.4184 - acc: 0.6538
Epoch 00026: val_acc did not improve from 0.71090
850/850 [==============================] - 375s 441ms/step - loss: 1.4181 - acc: 0.6539 - val_loss: 1.1788 - val_acc: 0.7091
Epoch 27/50
849/850 [============================>.] - ETA: 0s - loss: 1.4016 - acc: 0.6568
Epoch 00027: val_acc improved from 0.71090 to 0.71460, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 380s 447ms/step - loss: 1.4013 - acc: 0.6569 - val_loss: 1.1414 - val_acc: 0.7146
Epoch 28/50
849/850 [============================>.] - ETA: 0s - loss: 1.3937 - acc: 0.6610
Epoch 00028: val_acc did not improve from 0.71460
850/850 [==============================] - 376s 442ms/step - loss: 1.3936 - acc: 0.6611 - val_loss: 1.1373 - val_acc: 0.7125
Epoch 29/50
849/850 [============================>.] - ETA: 0s - loss: 1.3896 - acc: 0.6597
Epoch 00029: val_acc improved from 0.71460 to 0.71685, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 379s 445ms/step - loss: 1.3896 - acc: 0.6598 - val_loss: 1.1344 - val_acc: 0.7169
Epoch 30/50
849/850 [============================>.] - ETA: 0s - loss: 1.3840 - acc: 0.6609
Epoch 00030: val_acc improved from 0.71685 to 0.71710, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.3838 - acc: 0.6610 - val_loss: 1.1308 - val_acc: 0.7171
Epoch 31/50
849/850 [============================>.] - ETA: 0s - loss: 1.3710 - acc: 0.6644
Epoch 00031: val_acc improved from 0.71710 to 0.72155, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.3711 - acc: 0.6644 - val_loss: 1.1181 - val_acc: 0.7215
Epoch 32/50
849/850 [============================>.] - ETA: 0s - loss: 1.3580 - acc: 0.6681
Epoch 00032: val_acc did not improve from 0.72155
Epoch 00032: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
850/850 [==============================] - 376s 442ms/step - loss: 1.3578 - acc: 0.6681 - val_loss: 1.1584 - val_acc: 0.7098
Epoch 33/50
849/850 [============================>.] - ETA: 0s - loss: 1.3556 - acc: 0.6703
Epoch 00033: val_acc did not improve from 0.72155
850/850 [==============================] - 376s 442ms/step - loss: 1.3554 - acc: 0.6704 - val_loss: 1.1211 - val_acc: 0.7193
Epoch 34/50
849/850 [============================>.] - ETA: 0s - loss: 1.3362 - acc: 0.6718
Epoch 00034: val_acc did not improve from 0.72155
850/850 [==============================] - 377s 443ms/step - loss: 1.3360 - acc: 0.6718 - val_loss: 1.1250 - val_acc: 0.7190
Epoch 35/50
849/850 [============================>.] - ETA: 0s - loss: 1.3302 - acc: 0.6732
Epoch 00035: val_acc improved from 0.72155 to 0.72275, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 378s 445ms/step - loss: 1.3304 - acc: 0.6732 - val_loss: 1.1230 - val_acc: 0.7228
Epoch 36/50
849/850 [============================>.] - ETA: 0s - loss: 1.3245 - acc: 0.6756
Epoch 00036: val_acc improved from 0.72275 to 0.72670, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 380s 447ms/step - loss: 1.3247 - acc: 0.6756 - val_loss: 1.0932 - val_acc: 0.7267
Epoch 37/50
849/850 [============================>.] - ETA: 0s - loss: 1.3298 - acc: 0.6737
Epoch 00037: val_acc did not improve from 0.72670
850/850 [==============================] - 376s 443ms/step - loss: 1.3301 - acc: 0.6736 - val_loss: 1.1155 - val_acc: 0.7211
Epoch 38/50
849/850 [============================>.] - ETA: 0s - loss: 1.3323 - acc: 0.6736
Epoch 00038: val_acc did not improve from 0.72670
Epoch 00038: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
850/850 [==============================] - 377s 443ms/step - loss: 1.3322 - acc: 0.6736 - val_loss: 1.1010 - val_acc: 0.7235
Epoch 39/50
849/850 [============================>.] - ETA: 0s - loss: 1.3270 - acc: 0.6735
Epoch 00039: val_acc improved from 0.72670 to 0.73410, saving model to gdrive/My Drive/QW/Notebook/Quick Draw/Thesis_pre_research/(2-1)StrokeColor_Skatch-A-Net/best_model_2_1.ckpt
850/850 [==============================] - 377s 444ms/step - loss: 1.3272 - acc: 0.6734 - val_loss: 1.0813 - val_acc: 0.7341
Epoch 40/50
849/850 [============================>.] - ETA: 0s - loss: 1.3100 - acc: 0.6791
Epoch 00040: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3098 - acc: 0.6792 - val_loss: 1.1073 - val_acc: 0.7233
Epoch 41/50
849/850 [============================>.] - ETA: 0s - loss: 1.3206 - acc: 0.6757
Epoch 00041: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3205 - acc: 0.6758 - val_loss: 1.1091 - val_acc: 0.7260
Epoch 42/50
849/850 [============================>.] - ETA: 0s - loss: 1.3115 - acc: 0.6784
Epoch 00042: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3116 - acc: 0.6784 - val_loss: 1.1056 - val_acc: 0.7268
Epoch 43/50
849/850 [============================>.] - ETA: 0s - loss: 1.3003 - acc: 0.6803
Epoch 00043: val_acc did not improve from 0.73410
Epoch 00043: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
850/850 [==============================] - 376s 442ms/step - loss: 1.3003 - acc: 0.6803 - val_loss: 1.1134 - val_acc: 0.7251
Epoch 44/50
849/850 [============================>.] - ETA: 0s - loss: 1.2990 - acc: 0.6813
Epoch 00044: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 442ms/step - loss: 1.2992 - acc: 0.6813 - val_loss: 1.0807 - val_acc: 0.7263
Epoch 45/50
849/850 [============================>.] - ETA: 0s - loss: 1.3236 - acc: 0.6775
Epoch 00045: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3237 - acc: 0.6775 - val_loss: 1.0877 - val_acc: 0.7267
Epoch 46/50
849/850 [============================>.] - ETA: 0s - loss: 1.3123 - acc: 0.6787
Epoch 00046: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3120 - acc: 0.6787 - val_loss: 1.0856 - val_acc: 0.7257
Epoch 47/50
849/850 [============================>.] - ETA: 0s - loss: 1.3018 - acc: 0.6800
Epoch 00047: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 441ms/step - loss: 1.3015 - acc: 0.6800 - val_loss: 1.0814 - val_acc: 0.7313
Epoch 48/50
849/850 [============================>.] - ETA: 0s - loss: 1.2945 - acc: 0.6830
Epoch 00048: val_acc did not improve from 0.73410
Epoch 00048: ReduceLROnPlateau reducing learning rate to 1.5625000742147677e-05.
850/850 [==============================] - 375s 441ms/step - loss: 1.2945 - acc: 0.6831 - val_loss: 1.0808 - val_acc: 0.7329
Epoch 49/50
849/850 [============================>.] - ETA: 0s - loss: 1.2904 - acc: 0.6821
Epoch 00049: val_acc did not improve from 0.73410
850/850 [==============================] - 374s 441ms/step - loss: 1.2903 - acc: 0.6822 - val_loss: 1.1028 - val_acc: 0.7255
Epoch 50/50
849/850 [============================>.] - ETA: 0s - loss: 1.2929 - acc: 0.6814
Epoch 00050: val_acc did not improve from 0.73410
850/850 [==============================] - 375s 442ms/step - loss: 1.2927 - acc: 0.6814 - val_loss: 1.0952 - val_acc: 0.7276
finish training
###Markdown
Evaluate
###Code
def top_3_accuracy(X, Y):
return sparse_top_k_categorical_accuracy(X, Y, 3)
def top_5_accuracy(X, Y):
return sparse_top_k_categorical_accuracy(X, Y, 5)
model_E = MODEL
model_E.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.train.AdamOptimizer(),
metrics=['accuracy',top_3_accuracy, top_5_accuracy])
model_weights_path = CKPT_PATH
model_E.load_weights(model_weights_path)
print('finish')
result = model_E.evaluate_generator(
generate_data(valid_data, BATCH_SIZE, False),
steps = EVALUATE_STEPS,
verbose = 1
)
print('loss:', result[0])
print('top1 accuracy:', result[1])
print('top3 accuracy:', result[2])
print('top5 accuracy:', result[3])
###Output
850/850 [==============================] - 170s 200ms/step
loss: 1.1010285833302667
top1 accuracy: 0.7247588239697849
top3 accuracy: 0.8787941166232613
top5 accuracy: 0.9123647066424875
|
Notebooks/successful-models/submission.ipynb | ###Markdown
Data Mining Challenge: *Reddit Gender Text-Classification*The full description of the challenge and its solution can be found in this [Github page](https://inphyt.github.io/DataMiningChallange/), while all the relevant notebooks are publicly available in the associated [Github repository](https://github.com/InPhyT/DataMiningChallange). Modules
###Code
# Numpy & matplotlib for notebooks
%pylab inline
# Pandas
import pandas as pd # Data analysis and manipulation
# Sklearn
from sklearn.preprocessing import StandardScaler # to standardize features by removing the mean and scaling to unit variance (z=(x-u)/s)
from sklearn.neural_network import MLPClassifier # Multi-layer Perceptron classifier which optimizes the log-loss function using LBFGS or sdg.
from sklearn.model_selection import train_test_split # to split arrays or matrices into random train and test subsets
from sklearn.model_selection import KFold # K-Folds cross-validator providing train/test indices to split data in train/test sets.
from sklearn.decomposition import PCA, TruncatedSVD # Principal component analysis (PCA); dimensionality reduction using truncated SVD.
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB # Naive Bayes classifier for multinomial models
from sklearn.feature_extraction.text import CountVectorizer # Convert a collection of text documents to a matrix of token counts
from sklearn.metrics import roc_auc_score as roc # Compute Area Under the Receiver Operating Characteristic Curve from prediction scores
from sklearn.metrics import roc_curve, auc # Compute ROC; Compute Area Under the Curve (AUC) using the trapezoidal rule
# Matplotlib
import matplotlib # Data visualization
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
# Seaborn
import seaborn as sns # Statistical data visualization (based on matplotlib)
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Data Collection
###Code
# Import the test dataset and create a list of authors
test_data = pd.read_csv("../input/final-dataset/test_data.csv", encoding="utf8")
a_test = []
for author, group in test_data.groupby("author"):
a_test.append(author)
# Load predictions on validation
# MLP on doc2vec
x1 = np.load("../input/final-dataset/y_scoremlpClf.npy") #y_D2V-mlpClf.npy
# XGB on countvectorized texts
x2 = np.load("../input/final-dataset/y_predict_XGB.npy")
# MLP on binary countvectorized subreddits
x3 = np.load("../input/final-dataset/y_score_MLPs.npy")
# Load predictions of all models
y = np.load("../input/final-dataset/y_valid.npy") # common validation y of previous steps
# Load predicted test doc2vec
t1 = np.load("../input/final-dataset/y_testD2V.npy")
# Load predicted countvectorized test texts
t2 = np.load("../input/final-dataset/y_predict_testXGBnS.npy") # #y_testXGBnS.npy
# Load predicted countvectorized test subreddits
t3 = np.load("../input/final-dataset/y_testMLPs.npy")
###Output
_____no_output_____
###Markdown
Validation Data Manipulation
###Code
a = np.vstack((x3,x2,x1))
t = np.vstack((t3,t2,t1))
X = a.T # transpose
T = t.T # transpose
###Output
_____no_output_____
###Markdown
Validation Data Visualization
###Code
# Plot the test data along the 2 dimensions of largest variance
def plot_LSA(test_data, test_labels, savepath="PCA_demo.csv", plot=True):
lsa = TruncatedSVD(n_components=2)
lsa.fit(test_data)
lsa_scores = lsa.transform(test_data)
colors = ['orange','blue']
if plot:
plt.scatter(lsa_scores[:,0], lsa_scores[:,1], s=8, alpha=.8, c=test_labels, cmap=matplotlib.colors.ListedColormap(colors))
orange_patch = mpatches.Patch(color='orange', label='M')
blue_patch = mpatches.Patch(color='blue', label='F')
plt.legend(handles=[orange_patch, blue_patch], prop={'size': 20})
fig = plt.figure(figsize=(8, 8))
plot_LSA(X, y)
plt.show()
###Output
_____no_output_____
###Markdown
Model Definition & Training
###Code
# Logistic regression
lrClf = LogisticRegression(class_weight = "balanced",solver = "saga",C = 0.00005) # model
# Model fit
lrClf.fit(X, y)
###Output
_____no_output_____
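###Markdown
Before predicting on the test set, a rough check of how well the stacked logistic regression separates the two classes can be done on the validation-level predictions it was just fitted on. Note that this is an in-sample estimate and therefore optimistic; `roc` is the `roc_auc_score` alias imported above.
###Code
# In-sample ROC AUC of the stacked model on the validation predictions (optimistic estimate)
val_scores = lrClf.predict_proba(X)[:, 1]
print("In-sample ROC AUC: {:.4f}".format(roc(y, val_scores)))
###Output
_____no_output_____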
###Markdown
Final Prediction & Submission
###Code
# Final prediction
y_scorel = lrClf.predict_proba(T)[:,1]
# Create test dictionary
test = {'author': a_test,
'gender': y_scorel
}
# Create DataFrame
df = pd.DataFrame(test, columns = ['author', 'gender'])
# Create submission csv file
df.to_csv(r'../working/Submission.csv', index = False)
###Output
_____no_output_____ |
base/auxils/plotting_tool/PriceElectricityHourly.ipynb | ###Markdown
Import Required Packages
###Code
# Imports
import os
import datetime
import glob
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
###Output
_____no_output_____
###Markdown
Input data from User
###Code
#Market analysed: 'Investment','FullYear','DayAhead','Balancing' (choose one or several)
markets=['DayAhead']
output='PriceElectricityHourly'
first_timestep="2012-01-02"
#Meaning of SSS and TTT in the data: 'DaysHours','Hours5min','WeeksHours'
meaning_SSS_TTT='DaysHours'
#Time size of each time step in TTT for creating timestamp
size_timestep="3600s"
###Output
_____no_output_____
###Markdown
Plot Settings
###Code
# Set plotting specifications
%matplotlib inline
plt.rcParams.update({'font.size': 21})
plt.rcParams['xtick.major.pad']='12'
plt.rc('legend', fontsize=16)
y_limit = 1.1
lw = 3
###Output
_____no_output_____
###Markdown
Read Input Files
###Code
data=pd.DataFrame()
for market in markets:
csvfiles = []
for file in glob.glob("./input/results/" + market + "/*.csv"):
csvfiles.append(file)
csvfiles=[file.replace('./input\\','') for file in csvfiles]
csvfiles=[file.replace('.csv','') for file in csvfiles]
csvfiles=[file.split('_') for file in csvfiles]
csvfiles = np.asarray(csvfiles)
csvfiles=pd.DataFrame.from_records(csvfiles)
csvfiles.rename(columns={0: 'Output', 1: 'Scenario',2: 'Year',3:'Subset'}, inplace=True)
scenarios=csvfiles.Scenario.unique().tolist()
years=csvfiles.Year.unique().tolist()
subsets=csvfiles.Subset.unique().tolist()
for scenario in scenarios:
for year in years:
for subset in subsets:
file = "./input/results/"+ market + "/"+ output + "_" + scenario + "_" + year + "_" + subset + ".csv"
if os.path.isfile(file):
df=pd.read_csv(file,encoding='utf8')
df['Scenario'] = scenario
df['Market'] = market
#Renaming columns just in case timeconversion was required
df.rename(columns = {'G':'GGG', 'C':'CCC', 'Y':'YYY','TTT_NEW':'TTT','SSS_NEW':'SSS'}, inplace = True)
data=data.append(df)
#Timestamp addition
full_timesteps = pd.read_csv('./input/full_timesteps_'+meaning_SSS_TTT+'.csv')
full_timesteps['Key'] = full_timesteps['SSS'] + full_timesteps['TTT']
number_periods=len(full_timesteps.Key.unique())
full_timesteps['timestamp']= pd.date_range(first_timestep, periods = number_periods, freq =size_timestep)
dict_timestamp=dict(zip(full_timesteps.Key, full_timesteps.timestamp))
data['timestamp']=data['SSS']+data['TTT']
data['timestamp']=data['timestamp'].map(dict_timestamp)
###Output
C:\Users\s151529\AppData\Local\Continuum\anaconda3\lib\site-packages\ipykernel_launcher.py:3: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
Additional set declaration
###Code
ccc = list(data.CCC.unique())
rrr = list(data.RRR.unique())
#tech_type = list(data.TECH_TYPE.unique())
#commodity = list(data.COMMODITY.unique())
#fff = list(data.FFF.unique())
sss = list(full_timesteps.SSS.unique())
ttt = list(full_timesteps.TTT.unique())
###Output
_____no_output_____
###Markdown
Time step selection
###Code
# Seasons to investigate
# season_names = ['S01', 'S07', 'S20', 'S24', 'S28', 'S38', 'S42', 'S43']
# Make a list of every nth element of sss (1 <= nth <= number of elements in sss)
nth = 1
s = sss[0::nth]
# Or select seasons by names
# s = season_names
# Terms to investigate
# term_names = ['T005', 'T019', 'T033', 'T047', 'T061', 'T075', 'T089', 'T103', 'T117', 'T131', 'T145', 'T159']
# Make a list of every nth element of ttt (1 <= nth <= number of elements in ttt)
nth = 1
t = ttt[0::nth]
# Or select terms by name
# t = term_names
###Output
_____no_output_____
###Markdown
Make Directories
###Code
# Make output folder
if not os.path.isdir('output'):
os.makedirs('output')
# Make CurtailmentHourly folder
if not os.path.isdir('output/' + output):
os.makedirs('output/' + output)
# Make market folder
for market in markets:
if not os.path.isdir('output/' + output + '/'+ market +'/Country_wise'):
os.makedirs('output/' + output + '/'+ market +'/Country_wise')
# Make country folder
if not os.path.isdir('output/' + output + '/'+ market +'/Country_wise'):
os.makedirs('output/' + output + '/'+ market +'/Country_wise')
# Make country wise folders
for c in ccc:
if not os.path.isdir('output/' + output + '/'+ market +'/Country_wise/' + c):
os.makedirs('output/' + output + '/'+ market +'/Country_wise/' + c)
###Output
_____no_output_____
###Markdown
Plotting
###Code
# Make data frames to plot
data_plot = data[(data.SSS.isin(s)) & (data.TTT.isin(t))]
###Output
_____no_output_____
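###Markdown
As a minimal plotting sketch (assuming the result files contain the 'RRR', 'Val' and 'timestamp' columns used in the export step below), the filtered hourly prices can be overlaid per region:
###Code
# Sketch: overlay the hourly electricity price of each region in the filtered data
fig, ax = plt.subplots(figsize=(14, 6))
for region, group in data_plot.groupby('RRR'):
    group = group.sort_values('timestamp')
    ax.plot(group['timestamp'], group['Val'], lw=1, label=region)
ax.set_xlabel('Time')
ax.set_ylabel('Electricity price')
ax.legend(loc='upper right', ncol=4, fontsize='small')
plt.show()
###Output
_____no_output_____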
###Markdown
Data export
###Code
table = pd.pivot_table(data, values='Val', index=['Market','Scenario','YYY','timestamp'],
                       columns=['RRR'], aggfunc=np.sum, fill_value=0).reset_index()
for market in markets:
for scenario in scenarios:
for year in years:
table_export=table[table.columns.difference(['Market','Scenario','YYY'])].loc[(table['Market'] == market) & (table['Scenario'] == scenario) & (table['YYY'].astype(str) == year)]
table_export=table_export.set_index(['timestamp'])
table_export.to_csv('output/'+output+'/'+output+'_'+market+'_'+scenario+'_'+year+'.csv')
###Output
_____no_output_____ |
XML to Dataframe.ipynb | ###Markdown
First, we make a get request to obtain the Wikipedia page on Mars in XML format, using the Wikipedia API.
###Code
# requests and BeautifulSoup are used below; import them here
import requests
from bs4 import BeautifulSoup

url = 'https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=xml&exintro=&titles=Mars'
xml_data = requests.get(url).content
# Create a BeautifulSoup object from the xml
soup = BeautifulSoup(xml_data, "lxml")
# Prettify the BeautifulSoup object
pretty_soup = BeautifulSoup.prettify(soup)
# Print the response
print(pretty_soup)
# with open('Mars.xml', 'w') as file:
# file.write(pretty_soup)
###Output
_____no_output_____
###Markdown
We wish to extract the data above and put into a (pandas) dataframe.
###Code
import xml.etree.ElementTree as ET
import pandas as pd
class XML2DataFrame:
def __init__(self, xml_data):
self.root = ET.XML(xml_data)
def parse_element(self, element, parsed=None):
if parsed is None:
parsed = dict()
for key in element.keys():
parsed[key] = element.attrib.get(key)
if element.text:
parsed[element.tag] = element.text
for child in list(element): # RECURSION for nested tags
self.parse_element(child, parsed)
return parsed
def parse_root(self, root):
return [self.parse_element(child) for child in iter(root)] # list(element) vs iter(root) ?
def process_data(self):
structure_data = self.parse_root(self.root)
return pd.DataFrame(structure_data)
# Citation: http://www.austintaylor.io/lxml/python/pandas/xml/dataframe/2016/07/08/convert-xml-to-pandas-dataframe/
xml2df = XML2DataFrame(xml_data)
xml_dataframe = xml2df.process_data()
xml_dataframe.iloc[:,0:5]
# Access intro of Wikipedia article on Mars
xml_dataframe.iloc[0,1]
###Output
_____no_output_____
###Markdown
Multiple XML files Now for the case of multiple XML files, we loop through each file.
###Code
earth_pages = 'https://en.wikipedia.org/w/api.php?action=query&generator=allpages&gaplimit=100&gapfrom=Earth&format=xml&gapfilterredir=nonredirects'
earth_data = requests.get(earth_pages).content
# Create a BeautifulSoup object from the xml
earth_soup = BeautifulSoup(earth_data, "lxml")
earth_soup
earth_tags = earth_soup.find_all('page')
earth_tags
id_list = []
for link in earth_tags:
id_list.append(int(link.get('pageid')))
id_list
base_url = 'https://en.wikipedia.org/w/api.php?action=query&prop=extracts&format=xml&exintro=&'
for pageid in id_list:
query = 'pageids=%i' % pageid
# perform a GET request using the base_url and query
xml_data = requests.get(base_url+query).content
xml2df = XML2DataFrame(xml_data)
xml_dataframe_2 = xml2df.process_data()
xml_dataframe = pd.concat([xml_dataframe,xml_dataframe_2], ignore_index=True, join='inner')
xml_dataframe.iloc[:,0:5]
###Output
_____no_output_____ |
dl4j-examples/tutorials/08. RNNs- Sequence Classification of Synthetic Control Data.zepp.ipynb | ###Markdown
NoteView the README.md [here](https://github.com/deeplearning4j/dl4j-examples/tree/overhaul_tutorials/tutorials/README.md) to learn about installing, setting up dependencies and importing notebooks in Zeppelin BackgroundRecurrent neural networks (RNN's) are used when the input is sequential in nature. Typically RNN's are much more effective than regular feed forward neural networks for sequential data because they can keep track of dependencies in the data over multiple time steps. This is possible because the output of a RNN at a time step depends on the current input and the output of the previous time step. RNN's can also be applied to situations where the input is sequential but the output isn't. In these cases the output of the last time step of the RNN is typically taken as the output for the overall observation. For classification, the output of the last time step will be the predicted class label for the observation. In this notebook we will show how to build a RNN using the MultiLayerNetwork class of deeplearning4j (DL4J). This tutorial will focus on applying a RNN to a classification task. We will be using the UCI synthetic control chart dataset as the input for the RNN: 600 univariate time series of control-chart patterns, each 60 time steps long and belonging to one of 6 classes. Each observation is therefore a sequence with one scalar value per time step, and the prediction for a sequence is read from the output of the RNN at its final time step. Imports
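As a quick aside (general RNN background, not specific to DL4J): a vanilla recurrent cell updates its hidden state as $h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h)$, and for sequence classification a prediction can be read from the final state, e.g. $\hat{y} = \mathrm{softmax}(W_{hy} h_T + b_y)$. The GravesLSTM layer used below replaces the plain $\tanh$ cell with gated LSTM units, but the idea of carrying a hidden state across time steps is the same.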
###Code
import org.deeplearning4j.eval.Evaluation
import org.deeplearning4j.nn.api.OptimizationAlgorithm
import org.deeplearning4j.nn.conf.MultiLayerConfiguration
import org.deeplearning4j.nn.conf.NeuralNetConfiguration
import org.deeplearning4j.nn.conf.Updater
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork
import org.deeplearning4j.nn.weights.WeightInit
import org.deeplearning4j.nn.conf.layers.{DenseLayer, GravesLSTM, OutputLayer, RnnOutputLayer}
import org.deeplearning4j.nn.conf.distribution.UniformDistribution
import org.deeplearning4j.nn.conf.layers.GravesLSTM
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer
import org.deeplearning4j.datasets.datavec.SequenceRecordReaderDataSetIterator
import org.deeplearning4j.optimize.listeners.ScoreIterationListener
import org.datavec.api.split.NumberedFileInputSplit
import org.datavec.api.records.reader.impl.csv.CSVSequenceRecordReader
import org.nd4j.linalg.dataset.DataSet
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction
import org.nd4j.linalg.api.ndarray.INDArray
import org.nd4j.linalg.activations.Activation
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.apache.commons.io.IOUtils
import org.apache.commons.io.FileUtils
import java.io.File
import java.nio.charset.Charset
import java.util.Random
import java.net.URL
###Output
_____no_output_____
###Markdown
Download the datasetUCI has a number of datasets available for machine learning, make sure you have enough space on your local disk. The UCI synthetic control dataset can be found at [http://archive.ics.uci.edu/ml/datasets/synthetic+control+chart+time+series](http://archive.ics.uci.edu/ml/datasets/synthetic+control+chart+time+series). The code below will check if the data already exists and download the file.
###Code
// Cache directory for the downloaded data (an assumed location; adjust as needed)
val cache = new File(System.getProperty("user.home"), "dl4j_tutorial_data")
val dataPath = new File(cache, "/uci_synthetic_control/")
if(!dataPath.exists()) {
val url = "https://archive.ics.uci.edu/ml/machine-learning-databases/synthetic_control-mld/synthetic_control.data"
println("Downloading file...")
val data = IOUtils.toString(new URL(url), Charset.defaultCharset())
val lines = data.split("\n")
var lineCount = 0;
var index = 0
val linesList = scala.collection.mutable.ListBuffer.empty[String]
println("Extracting file...")
for (line <- lines) {
val count = new java.lang.Integer(lineCount / 100)
var newLine: String = null
newLine = line.replaceAll("\\s+", ", " + count.toString() + "\n")
// append the class label to the last value as well
newLine = newLine + ", " + count.toString()
linesList += newLine
lineCount += 1
}
// Random.shuffle returns a new, shuffled collection, so keep the result
val shuffledLines = util.Random.shuffle(linesList)
for (line <- shuffledLines) {
val outPath = new File(dataPath, index + ".csv")
FileUtils.writeStringToFile(outPath, line, Charset.defaultCharset())
index += 1
}
println("Done.")
} else {
println("File already exists.")
}
###Output
_____no_output_____
###Markdown
Iterating from diskNow that we've saved our dataset to a CSV sequence format, we need to set up a `CSVSequenceRecordReader` and iterator that will read our saved sequences and feed them to our network. If you have already saved your data to disk, you can run this code block (and remaining code blocks) as much as you want without preprocessing the dataset again. Convenient!
###Code
val batchSize = 128
val numLabelClasses = 6
// training data
val trainRR = new CSVSequenceRecordReader(0, ", ")
trainRR.initialize(new NumberedFileInputSplit(dataPath.getAbsolutePath() + "/%d.csv", 0, 449))
val trainIter = new SequenceRecordReaderDataSetIterator(trainRR, batchSize, numLabelClasses, 1)
// testing data
val testRR = new CSVSequenceRecordReader(0, ", ")
testRR.initialize(new NumberedFileInputSplit(dataPath.getAbsolutePath() + "/%d.csv", 450, 599))
val testIter = new SequenceRecordReaderDataSetIterator(testRR, batchSize, numLabelClasses, 1)
###Output
_____no_output_____
###Markdown
Configuring a RNN for ClassificationOnce everything needed is imported we can jump into the code. To build the neural network, we can use a set up like what is shown below. Because each sequence has a single scalar input per time step and there are 6 class labels, nIn is set to 1 and nOut is set to numLabelClasses (6) in the MultiLayerNetwork configuration.
###Code
val conf = new NeuralNetConfiguration.Builder()
.seed(123) //Random number generator seed for improved repeatability. Optional.
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.iterations(1)
.weightInit(WeightInit.XAVIER)
.updater(Updater.NESTEROVS)
.learningRate(0.005)
.gradientNormalization(GradientNormalization.ClipElementWiseAbsoluteValue) //Not always required, but helps with this data set
.gradientNormalizationThreshold(0.5)
.list()
.layer(0, new GravesLSTM.Builder().activation(Activation.TANH).nIn(1).nOut(10).build())
.layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
.activation(Activation.SOFTMAX).nIn(10).nOut(numLabelClasses).build())
.pretrain(false).backprop(true).build();
val model: MultiLayerNetwork = new MultiLayerNetwork(conf)
model.setListeners(new ScoreIterationListener(20))
###Output
_____no_output_____
###Markdown
Training the classifierTo train the model, pass the training iterator to the model's `fit()` method. We can use a loop to train the model using a prespecified number of epochs or passes through the training data.
###Code
val numEpochs = 1
(1 to numEpochs).foreach(_ => model.fit(trainIter) )
###Output
_____no_output_____
###Markdown
Model EvaluationOnce training is complete we only need a couple of lines of code to evaluate the model on a test set. Evaluating on a held-out test set is typically done to check for overfitting on the training data: if we overfit, we have essentially fit to the noise in the data. The `Evaluation` class has more built-in methods if you need to extract a confusion matrix, and other tools are also available for calculating the Area Under Curve (AUC).
###Code
val evaluation = model.evaluate(testIter)
// print the basic statistics about the trained classifier
println("Accuracy: "+evaluation.accuracy())
println("Precision: "+evaluation.precision())
println("Recall: "+evaluation.recall())
###Output
_____no_output_____ |
evolucao_pais.ipynb | ###Markdown
Evoluรงรฃo do paรญs
###Code
#loc[df['tipo'].isin(['ubs', 'unidade_servico_apoio_diagnose_terapia', 'nucleos_apoio_saude_familia', 'hospital_geral', 'hospital_especializado', 'clinicas_ambulatorios_especializados'])]\
df.groupby('tipo').sum().reset_index()
df.groupby('tipo').sum().reset_index().to_csv('evolucao_pais.csv', index=False)
###Output
_____no_output_____
###Markdown
Analysis of resources by region of the country
###Code
df1 = df.drop(columns=['uf', 'municipio', 'pop_municipio', 'pop_uf', '6cod_municipio'])
df1.head()
###Output
_____no_output_____ |
notebook/analysis_data_Chloe.ipynb | ###Markdown
Notebook used to analyse one specific file.
You can click `shift` + `enter` to run one cell, or click Run in the top menu.
import time
tp1 = time.time()
# Some magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
%reload_ext autoreload
# Ignore warnings in notebook
import warnings
warnings.filterwarnings('ignore')
# Matplotlib to plot the data
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.patches as patches
plt.rcParams['figure.figsize'] = 8,8
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some module needed in the notebook
import numpy as np
import javabridge
import bioformats
from itkwidgets import view
from sklearn.externals import joblib
###Output
_____no_output_____
###Markdown
The following path should direct to the folder "utils", on Window env it should have slash " / " and not backslash " \ " .
###Code
# Create a temporary python PATH to the module that we are using for the analysis
import sys
sys.path.insert(0, "/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/ChromosomeDetectionChloe/utils")
from chromosome_dsb import *
# Need to create a javabridge to use bioformats to open proprietary format
javabridge.start_vm(class_path=bioformats.JARS)
###Output
_____no_output_____
###Markdown
In the path_data variable you should enter the path to your data:
###Code
path_data = '/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/data_chloe/test_batch/exp1/'
position, time_point = load_data.stage_position(path_data)
###Output
_____no_output_____
###Markdown
Set Parameters
###Code
# Size kernel for background substraction, should be a little larger than the object of interest
back_sub_FOCI = 5
back_sub_Nucleus = 20
# LOCI detection:
# Smallest object (in pixels) to be detected
smaller = 1
# Largest object to be detected
largest = 5
# Threshold above which to look for
threshold = 12000
###Output
_____no_output_____
###Markdown
Find "Skeleton" of gonad
###Code
skelete = load_data.skeleton_coord(position,time_point)
###Output
_____no_output_____
###Markdown
Load Image In the path_img you can enter the name of your specific image "/....dv"
###Code
path_img = path_data + '/2017-04-12_RAD51-HTP3_cku80-exo1_002_visit_13_D3D_ALX.dv'
image, meta, directory = load_data.load_bioformats(path_img)
###Output
_____no_output_____
###Markdown
Plot "Skeleton" of gonad
###Code
data = np.concatenate((position,time_point[:, np.newaxis]), axis=1)
sort_data = data[np.argsort(data[:,2])]
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax.scatter(skelete[:,0], skelete[:,1], s=0.5)
stage_pos = ax.scatter(sort_data[:,0], sort_data[:,1])
working_on = ax.scatter(meta["PositionX"], meta["PositionY"], s=300, color = "r")
plt.legend([stage_pos, working_on], ["Stage Positions",
"Image currently working on"],
loc=0,fontsize='large')
img = image[:,:,:,3]
###Output
_____no_output_____
###Markdown
Optionally, you can visualize your data
###Code
#view(visualization.convert_view(img))
###Output
_____no_output_____
###Markdown
Find the nucleus in the image First, we need to load the classifier (clf) and scaler.
###Code
clf = joblib.load("/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/ChromosomeDetectionChloe/clf_scaler/clf")
scaler = joblib.load("/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/ChromosomeDetectionChloe/clf_scaler/scaler")
tp_1 = time.time()
result = search.rolling_window(img, clf, scaler)
tp_2 = time.time()
print(tp_2-tp_1)
bbox_ML = search.non_max_suppression(result, probaThresh=0.8, overlapThresh=0.3)
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
ax.imshow(np.amax(img,axis=0), vmax=img.max()/2, alpha = 0.8)
for coord in bbox_ML:
circles1 = patches.Circle((coord[0]+35,coord[1]+35),30, linewidth=3,edgecolor='r',facecolor='none')
ax.add_patch(circles1)
###Output
_____no_output_____
###Markdown
Background Subtraction
###Code
FOCI_ch, _ = img_analysis.background_correct(image, ch=1, size=back_sub_FOCI)
Nucleus_ch, _ = img_analysis.background_correct(image, ch=3, size=back_sub_Nucleus)
visualization.plot_background(image, FOCI_ch, Nucleus_ch)
###Output
_____no_output_____
###Markdown
Finding the Blobs/FOCI
###Code
blobs = img_analysis.find_blob(FOCI_ch, meta, directory, smaller = smaller,
largest = largest, thresh = threshold,
plot=True)
###Output
_____no_output_____
###Markdown
Binarization of the Channel with nucleus
###Code
binary = img_analysis.binarization(Nucleus_ch)
###Output
_____no_output_____
###Markdown
Optionally, you can visualize the result of the binarization
###Code
#view(visualization.convert_view(binary))
###Output
_____no_output_____
###Markdown
Load the position of the different nucleus
###Code
#bbox_ML = np.load("/Users/Espenel/Desktop/Mini-Grant-Image-analysis/2018/Chloe/13/bbox_3D.npy")
###Output
_____no_output_____
###Markdown
Mask FOCI that are not on the nucleus
###Code
masked = search.find_foci(blobs, FOCI_ch, Nucleus_ch, binary, bbox_ML)
###Output
_____no_output_____
###Markdown
Mask FOCI that are not on a nucleus found by the Machine Learning
###Code
res, bb_mask = search.binary_select_foci(bbox_ML, Nucleus_ch, masked)
###Output
_____no_output_____
###Markdown
Find and remove FOCI that were counted twice
###Code
num, cts, dup_idx, mask = search.find_duplicate(res, bb_mask)
visualization.plot_result(img, res, bbox_ML,\
cts, num, meta, directory, save = False)
dist_tip = img_analysis.distance_to_tip(bbox_ML, skelete, meta)
chro_pos = np.squeeze(np.dstack((bbox_ML[:,0]+35,
bbox_ML[:,1]+35, bbox_ML[:,4])))
df = img_analysis.final_table(meta, bbox_ML, \
dist_tip, cts, num, \
directory, save = False)
df
# log the parameters defined at the top of the notebook
to_save = {'back_sub_FOCI' : back_sub_FOCI,
           'back_sub_Nucleus' : back_sub_Nucleus,
           'small_object' : smaller,
           'large_object' : largest,
           'threshold' : threshold}
log.log_file(directory, meta, **to_save)
tp2 = time.time()
print("It took {}sec".format(int(tp2-tp1)))
###Output
_____no_output_____ |
02. Linear and Logistic Regression/Lab/.ipynb_checkpoints/Linear and Logistic Regression Lab-checkpoint.ipynb | ###Markdown
Linear and Logistic Regression Lab Getting acquainted with the tools. Performing some common tasks and creating our first models You will receive labs in this format. Edit the file to make everything work.You can add some cells as you wish. Some cells are read-only - you won't be able to edit them.**Notes:** 1. **DO NOT** copy everything in a new file. Edit this one (.ipynb), save it and submit it. **DO NOT** rename the file.2. Be careful what is asked of you - all problems have checks that you need to pass in order to get the points.3. There are tests that you can see, as well as hidden tests. You'll have to perform well on both the visible and the hidden tests. **In this assignment only**, there are no hidden tests. This is just for your convenience.4. If you have used other files, upload them too. You don't need to upload any files supplied with the lab assignment.5. Each lab is scored on a scale from 0 to 10. You can get partial credit (e. g. 5 / 10). Problem 1. Read the data (1 point)The dataset comes from [here](https://archive.ics.uci.edu/ml/machine-learning-databases/00222/). It contains information about the marketing of a Portuguese bank.The data you need to read is the `bank.csv` file in the `data` folder (use ";" as the column separator). The `bank-names.txt` file contains information about the dataset. Read it and you'll get some information about what it contains.Read the dataset using `pandas` (you can use the library with the alias `pd`). Save it in the `bank_data` variable.
###Code
bank_data = None
# YOUR CODE HERE
bank_data = pd.read_csv("data/bank.csv", sep=";")
np.random.seed(42)
assert_is_not_none(bank_data)
assert_equal(bank_data.shape, (4521, 17))
###Output
_____no_output_____
###Markdown
Problem 2. Separate features and labels (2 points) Separate the explanatory variables and the output variable (it's called `y` in this case). Create two new variables.
###Code
bank_features = None # explanatory features - 16 total
bank_output = None # output feature
# YOUR CODE HERE
bank_features = bank_data.drop("y", axis = 1)
bank_output = bank_data.y
assert_equal(bank_features.shape, (4521, 16))
assert_equal(bank_output.shape, (4521,))
###Output
_____no_output_____
###Markdown
Problem 3. Convert categorical variables (1 + 1 points)Convert all categorical variables in `bank_features` into indicator variables (dummies). Save the result in the same variable. (1 point)
###Code
# YOUR CODE HERE
bank_features = pd.get_dummies(bank_features)
assert_equal(bank_features.shape, (4521, 51))
###Output
_____no_output_____
###Markdown
Convert the `bank_output` variable to an indicator variable. This can be done in many ways. Look up how in StackOverflow if you get stuck.The goal is to **rewrite the column** (replace the values): it should be numeric, and be equal to 1 if the original value was "yes" and 0 otherwise. (1 point)
###Code
# YOUR CODE HERE
bank_output = bank_output.map(dict(yes=1, no=0))
assert_equal(bank_output.dtype, np.int64)
###Output
_____no_output_____
###Markdown
Problem 4. Perform logistic regression on the original features (1 point)Perform logistic regression. Save the model in the variable `bank_model`. Use all the data. This is not generally recommended but we'll think of a workaround next time.Pass a large number for the parameter `C = 1e6` (which is equivalent to `C = 1000000`).
###Code
bank_model = None
# YOUR CODE HERE
bank_model = LogisticRegression(C = 1e6)
bank_model.fit(bank_features, bank_output)
assert_is_not_none(bank_model)
assert_equal(bank_model.C, 1e6)
###Output
_____no_output_____
###Markdown
Problem 5. Get an estimate of the model performance (1 point)Use `bank_model.score()` to get an accuracy score. We'll talk about what it represents later in the course. Save the resulting score in the variable `accuracy_score`. To generate the score, use all data. Once again, this is not what we do usually but it's a good start anyway.
###Code
accuracy_score = None
# YOUR CODE HERE
accuracy_score = bank_model.score(bank_features, bank_output)
print(accuracy_score)
assert_almost_equal(accuracy_score, 0.9042247290422473, delta = 0.05)
###Output
_____no_output_____
###Markdown
We have to make a note here. If we explore how the output classes are distributed, we can see that "class 1" is about 11.5% of all samples, i.e. very few clients actually subscribed after the call, which is expected. This means the data is **highly imbalanced**. In this case, accuracy is not a good measure of the overall model performance. We have to look at other scoring measures to get a better estimate of what's going on.But once again, we're just getting started.
###Code
# There's nothing to do here, just execute the cell and view the plot and print results.
# Cells like these are here only for your convenience and to help you understand the task better
plt.bar([0, 1], [len(bank_output[bank_output == 0]), len(bank_output[bank_output == 1])])
plt.xticks([0, 1])
plt.xlabel("Class")
plt.ylabel("Count")
plt.show()
print("Positive cases: {:.3f}% of all".format(bank_output.sum() / len(bank_output) * 100))
###Output
_____no_output_____
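###Markdown
Because of this imbalance, class-aware metrics give a more honest picture than plain accuracy. The cell below is an optional sketch (not part of the graded problems) that inspects the confusion matrix and the F1 score of the model trained above using `sklearn.metrics`.
###Code
# Optional (not graded): class-aware metrics for the imbalanced dataset
from sklearn.metrics import confusion_matrix, f1_score
predictions = bank_model.predict(bank_features)
print("Confusion matrix:\n", confusion_matrix(bank_output, predictions))
print("F1 score (positive class): {:.3f}".format(f1_score(bank_output, predictions)))
###Output
_____no_output_____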
###Markdown
Problem 6. More features (1 point)The score is pretty high. But can we improve it? One way to try and improve it is to use polynomial features. As we saw, this creates all possible multiples of input features. In the real world, this corresponds to **feature interaction**.Create a model for quadratic features (`degree = 2`). Save it in the variable `quad_feature_transformer`. Also, set `interaction_only` to True: let's suppose we don't want to square each feature. This means that we have all single features $x_1, x_2, \dots$ and all interactions $x_1x_2, x_1x_3, \dots$ but no $x_1^2, x_2^2, \dots$Using it, transform all `bank_features`. Save them in the variable `bank_features_quad`.Note how the number of features exploded: from 51 we get more than 1300.
###Code
quad_feature_transformer = None
bank_features_quad = None
# YOUR CODE HERE
quad_feature_transformer = PolynomialFeatures(degree = 2, interaction_only = True)
bank_features_quad = quad_feature_transformer.fit_transform(bank_features)
assert_equal(quad_feature_transformer.degree, 2)
assert_equal(quad_feature_transformer.interaction_only, True)
assert_equal(bank_features_quad.shape, (4521, 1327))
###Output
_____no_output_____
###Markdown
Problem 7. Train a model on the quadratic features (1 point)You know the drill. Fit a logistic regression model with all data in `bank_features_quad` and `bank_output`. Use `C = 1e6`. Save it in `bank_model_quad`. Score it and save the score in the variable `accuracy_score_quad`.
###Code
bank_model_quad = None
accuracy_score_quad = None
# YOUR CODE HERE
bank_model_quad = LogisticRegression(C = 1e6)
bank_model_quad.fit(bank_features_quad, bank_output)
accuracy_score_quad = bank_model_quad.score(bank_features_quad, bank_output)
print("Accuracy: {:.3f}".format(accuracy_score_quad))
assert_is_not_none(bank_model_quad)
assert_equal(bank_model_quad.C, 1e6)
assert_equal(len(bank_model_quad.coef_[0]), bank_features_quad.shape[1]) # This is a simple check that the model has been trained
assert_almost_equal(accuracy_score_quad, 0.9, delta = 0.1)
###Output
_____no_output_____
###Markdown
Interesting... we have many more features but the accuracy actually dropped a little. We would observe the same behaviour if we took polynomials of degree 3: more than 20 000 features and accuracy less than 0.87.This is our first example of model selection. Why is the seemingly more complex model less accurate? There are two main reasons:* As we said, the default score (accuracy) is not good for this dataset, so its values aren't too relevant.* The number of features is alarmingly high. This leads to what we call "overfitting": our model is too complex. We can't quite catch it with this scoring scheme but we will be able to do that later.We can try a lot of things: test our model better, improve our scoring schemes, come up with better features, etc. In general, we need to take care of several things:* Are all parameters relevant? Can we discard some of them and how?* How do we deal with imbalanced data?* Is logistic regression the best type of model overall? Are there models that do better on this data?* What are the best hyperparameters for the model? We chose `C = 1e6` arbitrarily.We'll continue to do this next time. Let's try just one more thing. Problem 8. Perform normalization and compare results (1 point)We saw very strange results. A part of the problem might be that our data isn't normalized.Use the `MinMaxScaler` to scale all values in `bank_features_quad`. Save them in `bank_features_quad_scaled`. This will take several seconds.Perform a logistic regression on the new, scaled features: `bank_features_quad_scaled` and `bank_output`. Use the same parameters to score it.You should observe that the score improves significantly.
###Code
bank_model_quad_scaled = None
accuracy_score_quad_scaled = None
# YOUR CODE HERE
scaler = MinMaxScaler()
bank_features_quad_scaled = scaler.fit_transform(bank_features_quad)
bank_model_quad_scaled = LogisticRegression(C = 1e6)
bank_model_quad_scaled.fit(bank_features_quad_scaled, bank_output)
accuracy_score_quad_scaled = bank_model_quad_scaled.score(bank_features_quad_scaled, bank_output)
assert_is_not_none(bank_model_quad)
assert_equal(bank_model_quad.C, 1e6)
assert_equal(len(bank_model_quad.coef_[0]), bank_features_quad.shape[1])
assert_almost_equal(accuracy_score_quad_scaled, 0.969033399690334, delta = 0.05)
###Output
_____no_output_____ |
2020_08_03/全连接网络的手写数字识别(MNIST).ipynb | ###Markdown
Data loading. Create the dataset: load the MNIST data and apply preprocessing (convert to tensors). Create the dataloader: pass the dataset into a dataloader and set the batch size.
###Code
# Download the dataset to the given directory; `transform` is the preprocessing applied when the data is loaded
# Load the training set (Train)
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=torchvision.transforms.ToTensor(),
download=True)
# Load the test set (Test)
test_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=transforms.ToTensor())
print(train_dataset) # training set
print(test_dataset) # test set
batch_size = 100
# Define the data loaders from the datasets
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Inspect the data
examples = iter(test_loader)
example_data, example_target = examples.next() # 100*1*28*28
for i in range(9):
plt.subplot(3,3,i+1).set_title(example_target[i])
plt.imshow(example_data[i][0], 'gray')
plt.tight_layout()
plt.show()
###Output
###Markdown
Building the network
###Code
# The number of input nodes equals the image size: 28 x 28 x 1
input_size = 784
# Digits 0-9 make this a 10-class problem, so the output layer has 10 nodes
num_classes = 10
# Define the network
class NeuralNet(nn.Module):
# input dimension, number of hidden-layer nodes, output dimension
def __init__(self, input_size, hidden_size, num_classes):
super(NeuralNet, self).__init__()
self.input_size = input_size
self.l1 = nn.Linear(input_size, hidden_size)
self.relu = nn.ReLU()
self.l2 = nn.Linear(hidden_size, num_classes)
def forward(self, x):
out = self.relu(self.l1(x))
out = self.l2(out)
return out
model = NeuralNet(input_size, 500, num_classes).to(device)
model
# Quick check of the model output
examples = iter(test_loader)
example_data, _ = examples.next() # 100*1*28*28
model(example_data.reshape(example_data.size(0),-1)).shape
###Output
_____no_output_____
###Markdown
Define the loss function and optimizer
###Code
# Define the learning rate
learning_rate = 0.001
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Model training and testing
###Code
num_epochs = 10
n_total_steps = len(train_loader)
LossList = [] # record the loss of each epoch
AccuryList = [] # record the accuracy of each epoch
for epoch in range(num_epochs):
# -------
# Start training
# -------
model.train() # switch to training mode
totalLoss = 0
for i, (images, labels) in enumerate(train_loader):
images = images.reshape(-1, 28*28).to(device) # reshape the images
labels = labels.to(device)
# Forward pass and loss computation
outputs = model(images)
loss = criterion(outputs, labels)
totalLoss = totalLoss + loss.item()
# Backward pass
optimizer.zero_grad() # clear the gradients
loss.backward() # backpropagate
optimizer.step() # update the weights
if (i+1) % 300 == 0:
print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, n_total_steps, totalLoss/(i+1)))
LossList.append(totalLoss/(i+1))
# ---------
# Start testing
# ---------
model.eval()
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.reshape(-1, 28*28).to(device)
labels = labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1) # predicted classes
total += labels.size(0)
correct += (predicted == labels).sum().item()
acc = 100.0 * correct / total # overall accuracy on the test set
AccuryList.append(acc)
print('Accuracy of the network on the {} test images: {} %'.format(total, acc))
print("Model training finished")
# Plot the change in loss
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(13,7))
axes.plot(LossList, 'k--')
# Plot the change in accuracy
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(13,7))
axes.plot(AccuryList, 'k--')
###Output
_____no_output_____
###Markdown
Validate with real examples
###Code
# Test samples
examples = iter(test_loader)
example_data, example_targets = examples.next()
# Actual images
for i in range(9):
plt.subplot(3, 3, i+1)
plt.imshow(example_data[i][0], cmap='gray')
plt.show()
# Predict the results
images = example_data.reshape(-1, 28*28).to(device)
labels = example_targets.to(device)
# Forward pass
outputs = model(images)
# Convert the Tensor example_targets to a numpy array for easier display
print("True labels of the nine images above:", example_targets[0:9].detach().numpy())
# Get the predictions
# The output is an N x 10 matrix, so np.argmax over each row gives the predicted class
print("Predictions for the nine images above:", np.argmax(outputs[0:9].detach().numpy(), axis=1))
###Output
_____no_output_____ |
19-Timedelta.ipynb | ###Markdown
PYTHON Pandas - TimedeltaTimedeltas are differences in times, expressed in units such as days, hours, minutes and seconds. They can be both positive and negative. We can create Timedelta objects using several kinds of arguments, as shown below. String: we can create a Timedelta object by passing a string literal.
###Code
import pandas as pd
pd.Timedelta('2 days 2 hours 15 minutes 30 seconds')
###Output
_____no_output_____
###Markdown
IntegerWe can also create a Timedelta object from an integer together with a unit.
###Code
import pandas as pd
pd.Timedelta(6,unit='h')
###Output
_____no_output_____
###Markdown
Data OffsetsData offsets such as weeks, days, hours, minutes, seconds, milliseconds, microseconds and nanoseconds can also be used to construct a Timedelta.
###Code
import pandas as pd
pd.Timedelta(days=2)
###Output
_____no_output_____
###Markdown
to_timedelta()If the input is a Series, `pd.to_timedelta()` builds a Series; if the input is scalar-like, it returns a scalar; otherwise it produces a TimedeltaIndex.
###Code
import pandas as pd
pd.Timedelta(days=2)
###Output
_____no_output_____
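###Markdown
The cell above repeats the `Timedelta` constructor rather than demonstrating `pd.to_timedelta()` itself, so here is a minimal sketch of the conversion function (assuming pandas and numpy are available; the example values are arbitrary).
###Code
import numpy as np
import pandas as pd
# A scalar input returns a Timedelta, a list returns a TimedeltaIndex
print(pd.to_timedelta('1 days 06:05:01.00003'))
print(pd.to_timedelta(['1 days', '2 days 06:05:01.00003']))
# An integer array plus a unit also works
print(pd.to_timedelta(np.arange(5), unit='s'))
###Output
_____no_output_____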
###Markdown
OperationsYou can operate on Series/DataFrames and construct timedelta64[ns] Series, or timestamps, through subtraction operations on datetime64[ns] Series.Let's now create a DataFrame with Timedelta and datetime objects and perform some arithmetic operations on it:
###Code
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A = s, B = td))
df
###Output
_____no_output_____
###Markdown
Addition
###Code
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A = s, B = td))
df['C']=df['A']+df['B']
df
###Output
_____no_output_____
###Markdown
Subtraction
###Code
import pandas as pd
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([ pd.Timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A = s, B = td))
df['C']=df['A']-df['B']
df['D']=df['C']-df['B']
df
###Output
_____no_output_____ |
spacy_multiprocess.ipynb | ###Markdown
Turbo-charge your spaCy NLP pipeline> Tips and tricks to significantly speed up text processing using multi-core spaCy custom pipelines.Suppose you have a large tabular dataset on which you want to apply some non-trivial NLP transformations, such as stopword removal followed by lemmatizing (i.e. reducing to root form) the words in a text. [spaCy](https://spacy.io/usage) is an industrial-strength NLP library designed for just such a task.In the example shown below, the [New York Times Kaggle dataset](https://www.kaggle.com/nzalake52/new-york-times-articles) is used to showcase how to significantly speed up a spaCy NLP pipeline. The goal is to take in an article's text, and speedily return a list of lemmas with unnecessary words, called "stopwords", removed.Pandas DataFrames provide a convenient interface to work with tabular data of this nature. First, import the necessary modules shown.
###Code
import re
import pandas as pd
import spacy
from tqdm.notebook import tqdm
tqdm.pandas()
###Output
_____no_output_____
###Markdown
Input variablesThe tabular data is stored in a tab-separated file obtained by running the preprocessing notebook `preprocessing.ipynb` on the raw text data from Kaggle and stored in the `data/` directory. A curated stopword file is also provided in this location.Additionally, during initial testing, we can limit the size of the DataFrame being worked on (to around $2000$ samples) for faster execution. For the final run, disable the limit by setting it to zero.
###Code
inputfile = "data/nytimes.tsv"
stopwordfile = "data/stopwords/stopwords.txt"
limit = 0
###Output
_____no_output_____
###Markdown
Load spaCy modelFor lemmatization, a simple spaCy model can be initialized. Since we will not be doing any specialized tasks such as dependency parsing and named entity recognition in this exercise, these components are disabled.spaCy has a `sentencizer` component that can instead be enabled - this simply performs tokenization and sentence boundary detection, following which lemmas can be extracted as token properties.
###Code
nlp = spacy.load('en_core_web_sm', disable=['tagger', 'parser', 'ner'])
nlp.add_pipe(nlp.create_pipe('sentencizer'))
###Output
_____no_output_____
###Markdown
A method is defined to read in stopwords from a text file and convert it to a set in Python (for efficient lookup).
###Code
def get_stopwords():
"Return a set of stopwords read in from a file."
with open(stopwordfile) as f:
stopwords = []
for line in f:
stopwords.append(line.strip("\n"))
# Convert to set for performance
stopwords_set = set(stopwords)
return stopwords_set
stopwords = get_stopwords()
###Output
_____no_output_____
###Markdown
Read in New York Times DatasetThe pre-processed version of the NYT news dataset is read in as a Pandas DataFrame. The columns are named `date`, `headline` and `content` - the text present in the content column is what will be preprocessed to remove stopwords and generate token lemmas.
###Code
def read_data(inputfile):
"Read in a tab-separated file with date, headline and news content"
df = pd.read_csv(inputfile, sep='\t', header=None,
names=['date', 'headline', 'content'])
df['date'] = pd.to_datetime(df['date'], format="%Y-%m-%d")
return df
df = read_data(inputfile)
df.head()
###Output
_____no_output_____
###Markdown
Define text cleanerSince the news article data comes from a raw HTML dump, it is very messy and contains a host of unnecessary symbols, social media handles, URLs and other artifacts. An easy way to clean it up is to use a regex that parses only alphanumeric strings and hyphens (so as to include hyphenated words) that are between a given length (3 and 50). This filters each document down to only meaningful text for the lemmatizer.
###Code
def cleaner(df):
"Extract relevant text from DataFrame using a regex"
# Regex pattern for only alphanumeric, hyphenated text with 3 or more chars
pattern = re.compile(r"[A-Za-z0-9\-]{3,50}")
df['clean'] = df['content'].str.findall(pattern).str.join(' ')
if limit > 0:
return df.iloc[:limit, :].copy()
else:
return df
df_preproc = cleaner(df)
df_preproc.head(3)
###Output
_____no_output_____
###Markdown
Now that we have just the clean, alphanumeric tokens left over, these can be further cleaned up by removing stopwords before proceeding to lemmatization. Option 1. Work directly on the data using `pandas.Series.apply`The straightforward way to process this text is to use an existing method, in this case the `lemmatize` method shown below, and apply it to the `clean` column of the DataFrame. Lemmatization is done using the spaCy's underlying [`Doc` representation](https://spacy.io/usage/spacy-101annotations) of each token, which contains a `lemma_` property. Stopwords are removed simultaneously with the lemmatization process, as each of these steps involves iterating through the same list of tokens.
###Code
def lemmatize(text):
"""Perform lemmatization and stopword removal in the clean text
Returns a list of lemmas
"""
doc = nlp(text)
lemma_list = [str(tok.lemma_).lower() for tok in doc
if tok.is_alpha and tok.text.lower() not in stopwords]
return lemma_list
###Output
_____no_output_____
###Markdown
The resulting lemmas as stored as a list in a separate column `preproc` as shown below.
###Code
%%time
df_preproc['preproc'] = df_preproc['clean'].progress_apply(lemmatize)
df_preproc[['date', 'content', 'preproc']].head(3)
###Output
_____no_output_____
###Markdown
Applying this method to the `clean` column of the DataFrame and timing it shows that it takes almost a minute to run on $8800$ news articles. Option 2. Use `nlp.pipe`Can we do better? in the [spaCy documentation](https://spacy.io/api/languagepipe), it is stated that "processing texts as a stream is usually more efficient than processing them one-by-one". This is done by calling a language pipe, which internally divides the data into batches to reduce the number of pure-Python function calls. This means that the larger the data, the better the performance gain that can be obtained by `nlp.pipe`.To use the language pipe to stream texts, a separate lemmatizer method is defined that directly works on a spaCy `Doc` object. This method is then called in batches to work on a *sequence* of `Doc` objects that are streamed through the pipe as shown below.
###Code
def lemmatize_pipe(doc):
lemma_list = [str(tok.lemma_).lower() for tok in doc
if tok.is_alpha and tok.text.lower() not in stopwords]
return lemma_list
def preprocess_pipe(texts):
preproc_pipe = []
for doc in tqdm(nlp.pipe(texts, batch_size=20), total=len(df_preproc)):
preproc_pipe.append(lemmatize_pipe(doc))
return preproc_pipe
###Output
_____no_output_____
###Markdown
Just as before, a new column is created by passing data from the `clean` column of the existing DataFrame. Note that unlike in workflow $1$, we do not use the `apply` method - instead, the column of data (an iterable) is directly passed as an argument to the preprocessor pipe method.
###Code
%%time
df_preproc['preproc_pipe'] = preprocess_pipe(df_preproc['clean'])
df_preproc[['date', 'content', 'preproc_pipe']].head(3)
###Output
_____no_output_____
###Markdown
Timing this workflow shows barely any improvement, but it still takes almost a minute on the entire set of $8800$ news articles. One would expect that as we work on bigger and bigger datasets, the timing gain using `nlp.pipe` would become more noticeable (on average). Option 3. Parallelize the work using joblibWe can do still better! The previous workflows sequentially worked through each news document to produce the lemma lists, which were then appended to the DataFrame as a new column. Because each row's output is completely independent of the other, this is an *embarassingly parallel* problem, making it ideal for using multiple cores.The `joblib` library is recommended by spaCy for processing blocks of an NLP pipeline in parallel. Make sure that you `pip install joblib` before running the below section.To parallelize the workflow, a few more helper methods must be defined. * **Chunking:** The news article content is a list of (long) strings where each document represents a single article's text. This data must be fed in "chunks" to each worker process started by `joblib`. Each call of the `chunker` method returns a generator that only contains that particular chunk's text as a list of strings. During lemmatization, each new chunk is retrieved based on the iterator index (with the previous chunks being "forgotten").* **Flattening:** Once joblib creates a set of worker processes that work on each chunk, each worker returns a "list of list" containing lemmas for each document. These lists are then combined by the executor to provide a deeply nested final "list of list of lists". To ensure that the length of the output from the executor is the same as the actual number of articles, a "flatten" method is defined to combine the result into a list of lists containing lemmas. For example, if the executor returns a final result `[[[a, b, c], [d, e, f]], [[g, h, i], [j, k, l]]]`, a flattened version of this result would be `[[a, b, c], [d, e, f], [g, h, i], [j, k, l]]`.In addition to the above methods, a similar `nlp.pipe` method is used as in workflow $2$, on each chunk of texts. Each of these methods is wrapped into a `preprocess_parallel` method that defines the number of worker processes to be used ($7$ in this case), breaks the input data into chunks and returns a flattened result that can then be appended to the DataFrame.
###Code
from joblib import Parallel, delayed
from functools import partial
def chunker(iterable, total_length, chunksize):
return (iterable[pos: pos + chunksize] for pos in range(0, total_length, chunksize))
def flatten(list_of_lists):
"Flatten a list of lists to a combined list"
return [item for sublist in list_of_lists for item in sublist]
def process_chunk(texts):
preproc_pipe = []
for doc in nlp.pipe(texts, batch_size=20):
preproc_pipe.append(lemmatize_pipe(doc))
return preproc_pipe
def preprocess_parallel(texts, chunksize=100):
executor = Parallel(n_jobs=7, backend='multiprocessing', prefer="processes")
do = delayed(process_chunk)
tasks = (do(chunk) for chunk in chunker(texts, len(df_preproc), chunksize=chunksize))
result = executor(tasks)
return flatten(result)
%%time
df_preproc['preproc_parallel'] = preprocess_parallel(df_preproc['clean'], chunksize=1000)
df_preproc[['date', 'content', 'preproc_parallel']].head(3)
###Output
_____no_output_____
###Markdown
Timing this parallelized workflow shows significant performance gains (almost **3x** reduction in run time)! As the number of documents becomes larger, the additional overhead of starting multiple worker threads with `joblib` is quickly paid for, and this method can significantly outperform the sequential methods. Effect of chunk size and batch sizeNote that in the parallelized workflow, two parameters need to be specified - the optimum number can vary depending on the dataset. The `chunksize` controls the number of chunks being worked on by each process. In this example, for $8800$ documents, a chunksize of $1000$ is used. Too small a chunksize would mean that a large number of worker threads would spawn (each one waiting for other threads to complete), which can slow down execution. Generally, a chunksize of around $1/10^{th}$ of the total number of documents can be used as a starting point (assuming that all chunks fit into memory at any given time).The batch size is parameter specific to `nlp.pipe`, and again, a good value depends on the data being worked on. For reasonably long-sized text such as news articles, it makes sense to keep the batch size reasonably small (so that each batch doesn't contain *really* long texts), so in this case $20$ was chosen for the batch size. For other cases (e.g. Tweets) where each document is much shorter in length, a larger batch size can be used.**It is recommended to experiment with either parameter to see which combination produces the best performance**. Bonus: Use sets over lists for lookups wherever possibleNote that in the `get_stopwords()` method defined earlier on, the list of stopwords read in from the stopword file was converted to a set before using it in the lemmatizer method for stopword removal via lookups. This is a very useful trick in general, but specifically for stopword removal, the use of sets becomes **all the more important**. Why? In any realistic stopword list, such as this one for a news dataset, it's reasonable to expect *several hundred* stopwords. This is because for downstream tasks such as topic modelling or sentiment analysis, there are a number of domain-specific words that need to be removed (very common verbs, useless abbreviations such as timezones, days of the week, etc.). Each word in each and every document needs to be compared against every word in the stopword list, which is an expensive operation over tens of thousands of documents.It's well known that sets have $O(1)$ (i.e. consant) lookup time as opposed to lists, which have $O(n)$ lookup time. In the `lemmatize()` method, since we're checking each word for membership in the set of stopwords, we would expect sets to be much better than lists. To test this, we can rerun workflow $1$ but this time, use a stopword *list* instead.
###Code
stopwords = list(stopwords)
%%time
df_preproc['preproc_stopword_list'] = df_preproc['clean'].progress_apply(lemmatize)
df_preproc[['date', 'content', 'preproc_stopword_list']].head(3)
###Output
_____no_output_____ |
jupyter_demo/wikidata phenomizer demo.ipynb | ###Markdown
Wikidata PhenomizerThe phenomizer tool takes an ontology file and an association file. We'll use the HPO and the HPO's disease->phenotype association file, but supplemented with Wikidata-derived disease->phenotype associations setup **The Wikidata-phenomizer repo contains a python script for generating this association file**
###Code
!git clone [email protected]:SuLab/Wikidata-phenomizer.git
import os
os.chdir("Wikidata-phenomizer/")
!wget -N http://compbio.charite.de/jenkins/job/hpo.annotations/lastStableBuild/artifact/misc/phenotype_annotation.tab
!python phenomizer.py
###Output
/home/gstupp/projects/phenomizer/jupyter_demo/venv/lib/python3.5/site-packages/pandas/core/frame.py:6211: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version
of pandas will change to not sort by default.
To accept the future behavior, pass 'sort=False'.
To retain the current behavior and silence the warning, pass 'sort=True'.
sort=sort)
number of hpo annotations: 159162
number of wikidata annotations: 472
number overlap annotations: 307
top unique disease-phenotypes in wd:
NGLY1-deficiency 71
Colchicine poisoning 20
Mercury poisoning 16
lead poisoning 14
toxocariasis 8
Q fever 8
hymenolepiasis 7
tick-borne encephalitis 7
acute lymphocytic leukemia 6
Japanese encephalitis 6
Name: DB_Name, dtype: int64
###Markdown
**The Boqa repo contains the Java tool for running Phenomizer**
###Code
os.chdir("..")
!git clone [email protected]:SuLab/boqa.git
os.chdir("boqa/")
!wget -N http://purl.obolibrary.org/obo/hp.obo
hpo_ids = "HP:0001263,HP:0001252,HP:0000522,HP:0012804,HP:0000559,HP:0011968,HP:0009830,HP:0001265,HP:0002167,HP:0000970,HP:0040129"
ass_file = "../Wikidata-phenomizer/phenotype_annotation_wd.tab"
!java -jar target/boqa-0.0.3-SNAPSHOT.jar -hpo {hpo_ids} -obo hp.obo -af {ass_file}
###Output
init new BoqaService
Dec 27, 2018 12:44:48 PM sonumina.boqa.server.BOQACore <init>
INFO: Starting sonumina.boqa.server.BOQACore
Dec 27, 2018 12:44:48 PM ontologizer.go.OBOParser doParse
INFO: Got 14090 terms and 17755 relations in 127 ms
Dec 27, 2018 12:44:48 PM sonumina.boqa.server.BOQACore <init>
INFO: OBO file "hp.obo" parsed
Dec 27, 2018 12:44:49 PM ontologizer.go.Ontology assignLevel1TermsAndFixRoot
INFO: Ontology contains a single level-one term (All (HP:0000001)
Dec 27, 2018 12:44:49 PM sonumina.boqa.server.BOQACore <init>
INFO: Ontology graph with 14090 terms created
Dec 27, 2018 12:44:49 PM ontologizer.association.PafLineScanner processParsedAssociation
WARNING: PafLineScanner: Line 159532: Expected that dbObject "ORPHA:244305" maps to symbol "hypophosphatemic nephrolithiasis/osteoporosis 1 (ORPHA:244305)" but it maps to "hypophosphatemic nephrolithiasis/osteoporosis 2 (ORPHA:244305)"
Skipping association of item "NGLY1-deficiency (ORPHA:404454)" to HP:0008051 because term is obsolete!
(Are the obo file and the association file in sync?)
Dec 27, 2018 12:44:49 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: 159978 associations parsed, 159977 of which were kept while 0 malformed lines had to be ignored.
Dec 27, 2018 12:44:49 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: A further 1 associations were skipped due to various reasons whereas 0 of those where explicitly qualified with NOT, 1 referred to obsolete terms and 0 didn't match the requested evidence codes
Dec 27, 2018 12:44:49 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: PAF-parse: A total of 8278 terms are directly associated to 10859 items.
Dec 27, 2018 12:44:49 PM sonumina.boqa.server.BOQACore <init>
INFO: Got ontology and associations in 0.83 seconds
Dec 27, 2018 12:44:49 PM sonumina.boqa.server.BOQACore init
INFO: Setting up BOQA
Dec 27, 2018 12:44:49 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: 10860 items shall be considered
Dec 27, 2018 12:44:49 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: Available evidences: PCS->7312,IEA->46892,TAS->105325,ICE->10,
Dec 27, 2018 12:44:50 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: 10860 items passed criterias (supplied evidence codes)
Dec 27, 2018 12:44:50 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Determining differences
1200054 differences detected (110.50220994475139 per item)
Dec 27, 2018 12:44:50 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Determining differences with frequencies for maximal 5 terms
Dec 27, 2018 12:44:52 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Done with differences!
Dec 27, 2018 12:44:52 PM sonumina.boqa.server.BOQACore init
INFO: Sort terms
Dec 27, 2018 12:44:53 PM sonumina.boqa.server.BOQACore <init>
INFO: Starting sonumina.boqa.server.BOQACore
Dec 27, 2018 12:44:53 PM ontologizer.go.OBOParser doParse
INFO: Got 14090 terms and 17755 relations in 85 ms
Dec 27, 2018 12:44:53 PM sonumina.boqa.server.BOQACore <init>
INFO: OBO file "hp.obo" parsed
Dec 27, 2018 12:44:53 PM ontologizer.go.Ontology assignLevel1TermsAndFixRoot
INFO: Ontology contains a single level-one term (All (HP:0000001)
Dec 27, 2018 12:44:53 PM sonumina.boqa.server.BOQACore <init>
INFO: Ontology graph with 14090 terms created
Dec 27, 2018 12:44:54 PM ontologizer.association.PafLineScanner processParsedAssociation
WARNING: PafLineScanner: Line 159532: Expected that dbObject "ORPHA:244305" maps to symbol "hypophosphatemic nephrolithiasis/osteoporosis 1 (ORPHA:244305)" but it maps to "hypophosphatemic nephrolithiasis/osteoporosis 2 (ORPHA:244305)"
Skipping association of item "NGLY1-deficiency (ORPHA:404454)" to HP:0008051 because term is obsolete!
(Are the obo file and the association file in sync?)
Dec 27, 2018 12:44:54 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: 159978 associations parsed, 159977 of which were kept while 0 malformed lines had to be ignored.
Dec 27, 2018 12:44:54 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: A further 1 associations were skipped due to various reasons whereas 0 of those where explicitly qualified with NOT, 1 referred to obsolete terms and 0 didn't match the requested evidence codes
Dec 27, 2018 12:44:54 PM ontologizer.association.AssociationParser importAssociationFileFromPaf
INFO: PAF-parse: A total of 8278 terms are directly associated to 10859 items.
Dec 27, 2018 12:44:54 PM sonumina.boqa.server.BOQACore <init>
INFO: Got ontology and associations in 0.619 seconds
Dec 27, 2018 12:44:54 PM sonumina.boqa.server.BOQACore init
INFO: Setting up BOQA
Dec 27, 2018 12:44:54 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: 10860 items shall be considered
Dec 27, 2018 12:44:54 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: Available evidences: PCS->7312,IEA->46892,TAS->105325,ICE->10,
Dec 27, 2018 12:44:54 PM sonumina.boqa.calculation.BOQA provideGlobals
INFO: 10860 items passed criterias (supplied evidence codes)
Dec 27, 2018 12:44:55 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Determining differences
1200054 differences detected (110.50220994475139 per item)
Dec 27, 2018 12:44:55 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Determining differences with frequencies for maximal 5 terms
Dec 27, 2018 12:44:57 PM sonumina.boqa.calculation.DiffVectors initDiffVectors
INFO: Done with differences!
Dec 27, 2018 12:44:57 PM sonumina.boqa.server.BOQACore init
INFO: Sort terms
itemName|score
CONE-ROD DYSTROPHY, X-LINKED, 1 (OMIM:304020)|0.5618521679105415
CONGENITAL DISORDER OF DEGLYCOSYLATION; CDDG (OMIM:615273)|0.32472793279466233
NGLY1-deficiency (ORPHA:404454)|0.07001706350516379
Cyclic neutropenia (ORPHA:2686)|0.020875879762404497
NEUROPATHY, HEREDITARY SENSORY AND AUTONOMIC, TYPE II (OMIM:201300)|0.009914522181086032
METACHROMATIC LEUKODYSTROPHY DUE TO SAPOSIN B DEFICIENCY (OMIM:249900)|0.0046590651687770665
YUAN-HAREL-LUPSKI SYNDROME; YUHAL (OMIM:616652)|0.0025038227609720147
MITOCHONDRIAL DNA DEPLETION SYNDROME 14 (OMIM:616896)|0.0017445706391903551
severe acute respiratory syndrome (ORPHA:140896)|8.395633227508864E-4
CHROMOSOME 3pter-p25 DELETION SYNDROME (OMIM:613792)|6.124569153543484E-4
|
presentations/SciPy 2017.ipynb | ###Markdown
IPython-Unittest Test support for Jupyter/IPython through cell magics João Felipe Nicolaci Pimentel ([email protected]) `pip install ipython_unittest` https://github.com/JoaoFelipe/ipython-unittest Before I start 1- Survey about computational experiments http://scipy.npimentel.net 2- Poster about Provenance in Python scripts: noWorkflow: collecting, managing, and provenance from Python scripts - last row, behind the screen
###Code
%load_ext ipython_unittest.dojo
def add(x, y):
return x + y
%%unittest -p 1
assert add(1, 1) == 2
assert add(1, 2) == 3
assert add(2, 2) == 4
import unittest
import sys
class JupyterTest(unittest.TestCase):
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
suite = unittest.TestLoader().loadTestsFromTestCase(JupyterTest)
unittest.TextTestRunner(verbosity=1, stream=sys.stdout).run(suite)
%%unittest -u
"add 1 + 1 returns 2"
assert add(1, 1) == 2
"add 1 + 2 returns 3"
assert add(1, 2) == 3
"add 2 + 2 returns 4"
assert add(2, 2) == 4
###Output
_____no_output_____
###Markdown
Other magics:
###Code
%%write javascript test.js
var assert = require('assert');
describe('Array', function() {
describe('#indexOf()', function() {
it('should return -1 when the value is not present', function() {
assert.equal(-1, [1,2,3].indexOf(4));
});
});
});
%%external --previous 1
mocha test.js
%%unittest_main
class JupyterTest(unittest.TestCase):
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
%%unittest_testcase
def test_add_1_1_returns_2(self):
self.assertEqual(add(1, 1), 2)
def test_add_1_2_returns_3(self):
self.assertEqual(add(1, 2), 3)
def test_add_2_2_returns_4(self):
self.assertEqual(add(2, 2), 4)
###Output
_____no_output_____ |
src/w3techs/top-n.ipynb | ###Markdown
Notebook for adding top_sitesThese come in batches from Matthias. Last time, all the data were from the same date.Assuming that continues, just set the DATE below to that date PLUS one day.
###Code
import pandas as pd
MEASUREMENTS_TIME = pd.Timestamp('2021-01-02')
from os.path import join
from os import listdir
from src.shared.utils import get_country
from datetime import datetime
###Output
_____no_output_____
###Markdown
Parsing the dataframes
###Code
def parse_df (df: pd.DataFrame) -> pd.DataFrame:
'''
Returns a df with columns
name, marketshare
'''
def percentify (x):
try:
n = x.split('%')[0]
return float(n)/100
except:
return 0
# name of columns where percentages are
perc_col_name = [c for c in df.columns if c.startswith('Percentage')][0]
df['marketshare'] = df[perc_col_name].apply(percentify)
# if this is a heirarchical csv,
# get top-level entries only
if 'Rank' in df.columns:
df['top-level'] = df['Rank'].apply(lambda x: str(x).endswith('.0'))
df = df[df['top-level']==True]
# get names from 1st column
n = df.columns[1]
else:
# get names from 0th column
n = df.columns[0]
# get jurisdictions
df['name'] = df[n]
# remove 'and territories' for server locations
df['name'] = df['name'].apply(lambda x: x.split(' and territories')[0])
df['jurisdiction_alpha2'] = df['name'].apply(get_country)
return df[['name', 'marketshare', 'jurisdiction_alpha2']]
ex_fn = listdir('top-sites')[1]
ex_df = pd.read_csv(join('top-sites', ex_fn))
parse_df(ex_df)
###Output
_____no_output_____
###Markdown
Extracting market/top-n from filenames
###Code
dfs = []
for my_dir in listdir('top-sites'):
fn = my_dir.split('.csv')[0]
if fn.split('-')[1]=='hierarchy':
market, h, top_n, date_str = fn.split('-')
date = datetime.strptime(date_str, '%Y%M')
print(market, top_n, date)
df = pd.read_csv(join('top-sites', my_dir))
df = parse_df(df)
df['measurement_scope'] = top_n
df['market'] = market
df['date'] = date
dfs.append(df)
df = pd.concat(dfs)
df.market.unique()
dfs = []
for my_dir in listdir('top-sites'):
fn = my_dir.split('.csv')[0]
market = fn.split('-')[0]
# if we don't already have this data from the heirarchical files
if (market not in df.market.unique()):
market, top_n, date_str = fn.split('-')
date = datetime.strptime(date_str, '%Y%M')
print(market, top_n, date)
t_df = pd.read_csv(join('top-sites', my_dir))
t_df = parse_df(t_df)
t_df['measurement_scope'] = top_n
t_df['market'] = market
t_df['date'] = date
dfs.append(t_df)
concat = pd.concat(dfs)
concat = pd.concat([concat, df])
concat.market.unique()
###Output
_____no_output_____
###Markdown
Simple analyses
###Code
concat.to_csv('out/top-sites-combined.csv')
df = pd.read_csv('out/top-sites-combined.csv').drop('Unnamed: 0', axis=1)
df
df.market = df.market.replace({
'dns_servers': 'dns-server',
'server_locations': 'server-location',
'data_center': 'data-centers',
'ssl_certificate': 'ssl-certificate',
'web_hosting': 'web-hosting',
'reverse_proxy': 'proxy',
'top_level_domains': 'top-level-domain',
})
###Output
_____no_output_____
###Markdown
TODO: Write to databaseNOTE FOR README: In 'hierarchical' files, I took the top-level only (e.g., 'DigiCert Group' vs. 'DigiCert' + its other subsidiaries).
###Code
from config import config
postgres_config = config['postgres']
import psycopg2
conn = psycopg2.connect(**postgres_config)
cur = conn.cursor()
from imp import reload
# conn.commit()
# provider marketshare for each
import src.w3techs.types
reload(src.w3techs.types)
from src.w3techs.types import ProviderMarketshare
from src.shared.types import Alpha2
for i, row in df.iterrows():
try:
alpha2 = Alpha2(row.jurisdiction_alpha2)
except:
alpha2 = None
marketshare = ProviderMarketshare(
row['name'],
None,
alpha2,
row.measurement_scope,
row.market,
float(row['marketshare']),
pd.Timestamp(row.date))
marketshare.write_to_db(cur, conn, commit=False)
conn.commit()
###Output
_____no_output_____
###Markdown
find pop weighted gini for top 1k and top 10k
###Code
from src.w3techs.collect import included_markets
import src.w3techs.utils as utils
# time = pd.Timestamp(df.date.unique()[0], tz='America/Los_Angeles')
for measurement_scope in ['top_1k', 'top_10k']:
print(measurement_scope)
for market in included_markets:
pop_weighted_gini = utils.population_weighted_gini(
cur, measurement_scope, market, MEASUREMENTS_TIME)
if pop_weighted_gini:
print(f'[X] {market}')
pop_weighted_gini.write_to_db(cur, conn)
else:
print(f'[ ] {market}')
###Output
top_1k
[X] web-hosting
[X] ssl-certificate
[X] proxy
[X] data-centers
[X] dns-server
[X] server-location
[X] top-level-domain
top_10k
[X] web-hosting
[X] ssl-certificate
[X] proxy
[X] data-centers
[X] dns-server
[X] server-location
[X] top-level-domain
|
Forecasting-workshop/2a_Amazon_Forecast_Model.ipynb | ###Markdown
Bike-Share Demand Forecasting 2a: Modelling with [Amazon Forecast](https://aws.amazon.com/forecast/)We look at three ways of tackling the bike-share demand forecasting problem prepared in the earlier [1_Data_Preparation](1_Data_Preparation.ipynb) notebook:
1. Treat it as a general/standardised business problem with an AWS "Managed AI" service ([Amazon Forecast](https://aws.amazon.com/forecast/)).
2. Tackle the same business problem with a SageMaker built-in algorithm ([DeepAR](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html)).
3. Use a custom SageMaker algorithm, doing the core modelling ourselves while still taking advantage of SageMaker's differentiating features.
**This notebook shows how to work through the Amazon Forecast service via the AWS console, but all of the same tasks can also be performed through the APIs.** Dependencies and configurationLoad the libraries, define the configuration values, and connect to the AWS SDKs.
###Code
# Basic data configuration is initialised and stored in the Data Preparation notebook
# ...We just retrieve it here:
%store -r
assert bucket, "Variable `bucket` missing from IPython store"
assert data_prefix, "Variable `data_prefix` missing from IPython store"
assert target_train_filename, "Variable `target_train_filename` missing from IPython store"
assert target_test_filename, "Variable `target_test_filename` missing from IPython store"
assert related_filename, "Variable `related_filename` missing from IPython store"
%load_ext autoreload
%autoreload 1
# Built-Ins:
from datetime import datetime, timedelta
# External Dependencies:
import boto3
from IPython.core.display import display, HTML
import pandas as pd
# Local Dependencies:
%aimport util
session = boto3.Session()
region = session.region_name
forecast = session.client(service_name="forecast")
forecast_query = session.client(service_name="forecastquery")
s3 = session.client(service_name="s3")
###Output
_____no_output_____
###Markdown
OverviewAs the summary above suggests, the overall Amazon Forecast workflow is a typical batch ML model-training flow. The `forecast` SDK client initialised above supports, programmatically, every step and every Amazon Forecast operation that we perform in the AWS Console; in this lab we walk through the **AWS Console** flow.
Step 1: Selecting the Amazon Forecast domainAmazon Forecast defines several **domains** (documented [here](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-domains-ds-types.html)). A domain provides a **base data schema** and featurized model architectures tuned for a particular use case. You can also add custom data fields, but in general the more you can exploit the structure of the base domain model, the better model performance you can expect.In this example we use the [`RETAIL`](https://docs.aws.amazon.com/forecast/latest/dg/retail-domain.html) domain. The [`METRICS`](https://docs.aws.amazon.com/forecast/latest/dg/metrics-domain.html) domain, or another domain, might give better results; if you have time, experiment with other domains and check whether performance improves.
Step 2: Preparing the dataThe [domain documentation](https://docs.aws.amazon.com/forecast/latest/dg/retail-domain.html) tells us which fields are required and which are optional. We adjust the data slightly and then upload it back to S3.
###Code
target_train_df = pd.read_csv(f"./data/{target_train_filename}")
target_test_df = pd.read_csv(f"./data/{target_test_filename}")
related_df = pd.read_csv(f"./data/{related_filename}")
###Output
_____no_output_____
###Markdown
In the Retail domain, the target timeseries dataset must use the column names `timestamp`, `item_id` and `demand`, with no other fields.The dataset we prepared in the previous notebook already meets these requirements, apart from renaming the `customer_type` field to `item_id`.
###Code
target_train_df.rename(columns={ "customer_type": "item_id" }, inplace=True)
target_test_df.rename(columns={ "customer_type": "item_id" }, inplace=True)
target_train_df.head()
###Output
_____no_output_____
###Markdown
In the Retail domain, the related timeseries:
1. contains a `timestamp` (which we already have),
2. must also include an `item_id` (even though the weather information does not vary by customer_type),
3. defines several optional domain fields by default, but since these are largely irrelevant to the current dataset we change them.
For general datasets, also bear in mind that:
4. columns may not use Forecast's [reserved field names](https://docs.aws.amazon.com/forecast/latest/dg/reserved-field-names.html) (including `temp`),
5. the [schema](https://docs.aws.amazon.com/forecast/latest/dg/API_SchemaAttribute.html) for user-added fields may use the `string`, `integer`, `float` or `timestamp` types. For boolean fields, loading the data as strings gives the same result.
We therefore prepare the data as follows:* Add `item_id` to the related timeseries data (item 2 above).* Rename the `temp` column to `temperature` (item 4 above).
###Code
# Duplicate data for each item_id in the target dataframe:
related_peritem_dfs = []
item_ids = target_train_df["item_id"].unique()
for item_id in item_ids:
df = related_df.copy()
df["item_id"] = item_id
related_peritem_dfs.append(df)
related_df = pd.concat(related_peritem_dfs).sort_values(["timestamp", "item_id"]).reset_index(drop=True)
# Rename any reserved columns to keep Forecast happy:
related_df.rename(columns={ "temp": "temperature" }, inplace=True)
related_df.head()
###Output
_____no_output_____
###Markdown
...and save the data to S3, ready to be imported into Amazon Forecast.
###Code
print("Writing dataframes to file...")
!mkdir -p ./data/amzforecast
target_train_df.to_csv(
f"./data/amzforecast/{target_train_filename}",
index=False
)
target_test_df.to_csv(
f"./data/amzforecast/{target_test_filename}",
index=False
)
related_df.to_csv(
f"./data/amzforecast/{related_filename}",
index=False
)
print("Uploading dataframes to S3...")
s3.upload_file(
Filename=f"./data/amzforecast/{target_train_filename}",
Bucket=bucket,
Key=f"{data_prefix}amzforecast/{target_train_filename}"
)
print(f"s3://{bucket}/{data_prefix}amzforecast/{target_train_filename}")
s3.upload_file(
Filename=f"./data/amzforecast/{target_test_filename}",
Bucket=bucket,
Key=f"{data_prefix}amzforecast/{target_test_filename}"
)
print(f"s3://{bucket}/{data_prefix}amzforecast/{target_test_filename}")
s3.upload_file(
Filename=f"./data/amzforecast/{related_filename}",
Bucket=bucket,
Key=f"{data_prefix}amzforecast/{related_filename}"
)
print(f"s3://{bucket}/{data_prefix}amzforecast/{related_filename}")
print("Done")
###Output
_____no_output_____
###Markdown
Step 3: Create a Dataset GroupOpen the Amazon Forecast console in the `region` you selected earlier. You may see the page below, or a different dashboard if you have used the service before. From a landing page like the one below, or from "Dataset Groups" in the left-hand menu, click "Create Dataset Group".> **Create dataset group** - Dataset group name : **`bikeshare_dataset_group`** - Forecasting domain : **`Retail`**Click the **Next** button.
Step 4: Create a Target DatasetYou should then be prompted to create the target dataset as below (if not, you can choose to create the target dataset from the dashboard).> **Create target time series dataset** - Dataset name : **`bikeshare_target_dataset`** - Frequency of your data : **`hourly`** - Data schema : **`Re-order the columns in the data schema`**After making these changes, click "Next".First, review the structure of the dataframe:
###Code
target_train_df.head()
###Output
_____no_output_____
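###Markdown
This lab performs Steps 3-4 in the console as described above, but for reference here is a minimal, untested sketch of the equivalent SDK calls using the `forecast` client created earlier. The resource names simply mirror the console values, and the ARN variables introduced here (`dataset_group_arn`, `target_dataset_arn`) exist only to support the later sketches.
###Code
# Hedged sketch only: the lab itself performs these steps in the AWS Console.
create_group_response = forecast.create_dataset_group(
    DatasetGroupName="bikeshare_dataset_group",
    Domain="RETAIL",
)
dataset_group_arn = create_group_response["DatasetGroupArn"]

target_schema = {
    "Attributes": [
        {"AttributeName": "timestamp", "AttributeType": "timestamp"},
        {"AttributeName": "item_id", "AttributeType": "string"},
        {"AttributeName": "demand", "AttributeType": "float"},
    ]
}
create_dataset_response = forecast.create_dataset(
    DatasetName="bikeshare_target_dataset",
    Domain="RETAIL",
    DatasetType="TARGET_TIME_SERIES",
    DataFrequency="H",
    Schema=target_schema,
)
target_dataset_arn = create_dataset_response["DatasetArn"]

# Attach the new dataset to the dataset group
forecast.update_dataset_group(
    DatasetGroupArn=dataset_group_arn,
    DatasetArns=[target_dataset_arn],
)
###Output
_____no_output_____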
###Markdown
Step 5: Import target timeseries dataNext, run a *dataset import job* (if you are not prompted for it directly, you can select it from the dashboard).> **Import target times series data** - Dataset import name : **`bikeshare_target_import`** - Timestamp format : **`yyyy-MM-dd HH:mm:ss`** (use the default; no change needed) - Custom IAM role ARN : **`copy the value printed below`** - Data location : **`copy the value printed below`**When you click **"Start Import"**, you will be taken back to the Forecast dashboard.
###Code
iam = boto3.client('iam')
role = iam.list_roles(PathPrefix='/service-role/')
iam_arn= role['Roles'][0]['Arn']
print("Custom IAM role ARN ๊ฐ์ ์๋ ํ ์ค์ copyํด์ ์ฌ์ฉํ์ธ์. \n{}".format(iam_arn) )
print("Data location ๊ฐ์ ์๋ ํ ์ค์ copyํด์ ์ฌ์ฉํ์ธ์")
print(f"s3://{bucket}/{data_prefix}amzforecast/{target_train_filename}")
###Output
_____no_output_____
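###Markdown
Again purely for reference, a hedged sketch of the equivalent import-job API call. It assumes the `target_dataset_arn` variable from the earlier sketch and reuses the role ARN and S3 path printed above; the lab itself starts the import from the console.
###Code
# Hedged sketch only (not executed in this lab)
forecast.create_dataset_import_job(
    DatasetImportJobName="bikeshare_target_import",
    DatasetArn=target_dataset_arn,  # from the create_dataset sketch above
    DataSource={
        "S3Config": {
            "Path": f"s3://{bucket}/{data_prefix}amzforecast/{target_train_filename}",
            "RoleArn": iam_arn,
        }
    },
    TimestampFormat="yyyy-MM-dd HH:mm:ss",
)
###Output
_____no_output_____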
###Markdown
* Amazon Forecast spins up resources to process the job in a scalable way and then validates the dataset, so the dataset import job takes a few minutes to complete (roughly 10-15 minutes for this dataset).
* There is no need to wait while the target data import runs: move straight on to the next step and kick off the related data import as well.
* A "predictor" (forecast model) can be trained as soon as the target data has been imported, but since using the related data should improve performance, we only train the predictors after the related data has also been imported.
Step 6: Create and import Related Timeseries DatasetThe next step is to create and import the related dataset.First, review the structure of the related data:
###Code
related_df.head()
###Output
_____no_output_____
###Markdown
> **Create related time series dataset** - Dataset name : **`bikeshare_related_dataset`** - Frequency of your data : **`hourly`** - Data schema : **`์๋ schema๋ฅผ copyํ์ฌ ๊ธฐ์กด ๋ด์ฉ์ ์ง์ด ํ ๋ถ์ฌ๋ฃ๊ธฐ ํฉ๋๋ค.`**```json{ "Attributes": [ { "AttributeName": "timestamp", "AttributeType": "timestamp" }, { "AttributeName": "season", "AttributeType": "float" }, { "AttributeName": "holiday", "AttributeType": "string" }, { "AttributeName": "weekday", "AttributeType": "float" }, { "AttributeName": "workingday", "AttributeType": "string" }, { "AttributeName": "weathersit", "AttributeType": "float" }, { "AttributeName": "temperature", "AttributeType": "float" }, { "AttributeName": "atemp", "AttributeType": "float" }, { "AttributeName": "hum", "AttributeType": "float" }, { "AttributeName": "windspeed", "AttributeType": "float" }, { "AttributeName": "item_id", "AttributeType": "string" } ]}```API docs๋ ๊ฐ [SchemaAttribute](https://docs.aws.amazon.com/forecast/latest/dg/API_SchemaAttribute.html) ๊ฐ ํฌํจํ ์ ์๋ low-level์ ์์ธ ๋ด์ฉ๊ณผ ์ ์ฒด [Schema](https://docs.aws.amazon.com/forecast/latest/dg/API_Schema.html) ๊ฐ์ฒด๋ฅผ ์ ๊ณตํฉ๋๋ค. ๋ฐ์ดํฐ์
์์
์ด ์๋ฃ๋๋ฉด, ๋ค์ ๋ฐ์ดํฐ์
import job์ ์ํํฉ๋๋ค.> **Import related times series data** - Dataset import name : **`bikeshare_related_import`** - Timestamp format : **`yyyy-MM-dd HH:mm:ss`** (๋ณ๊ฒฝ์ด ํ์์์ด default ๊ฐ ์ฌ์ฉ) - Custom IAM role ARN : **`arn:aws:iam::XXXXXXXX:role/service-role/ForecastDemoLab-XXX`** (๋ณ๊ฒฝ์ด ํ์์์ด default ๊ฐ ์ฌ์ฉ) - Data location : **`์๋ ์ถ๋ ฅ๋ ๊ฐ์ copyํด์ ์ฌ์ฉํฉ๋๋ค.`**"Start import" ๋ฅผ ์ํํ๊ฒ ๋๋ฉด, ๋ฐ์ดํฐ๊ฐ ๋ก๋๋๋ ๋์ ๋์๋ณด๋ ํ๋ฉด์ผ๋ก ๋์๊ฐ๊ฒ ๋ฉ๋๋ค.
###Code
print("Data location ๊ฐ์ ์๋ ํ ์ค์ copyํด์ ์ฌ์ฉํ์ธ์")
print(f"s3://{bucket}/{data_prefix}amzforecast/{related_filename}")
###Output
_____no_output_____
###Markdown
Step 7: While the datasets import...Depending on the data volume, the imports can take several minutes or more. If they are taking a while, one option is to work through the [notebook that trains models in SageMaker](2b_SageMaker_Built-In_DeepAR.ipynb) in the meantime.Note: the dashboard usually updates in real time, but you may need to refresh the Amazon Forecast dashboard to see the latest status.
Step 8: Train a "Prophet" predictorOnce the dashboard shows that both the target and related data imports have completed, we are ready to start training predictors.> ** Train predictor** - Predictor name: **`bikeshare_prophet_predictor`** - Forecast horizon: **`336 (2 weeks at 24hrs/day)`** - Forecast frequency: **`hour`** (to match the frequency of the source data) - Algorithm selection: **`Manual`** - Algorithm: **`Prophet`** - Country for holidays: **`United States`** (this dataset is US-based) - Number of backtest windows: **`4`** - Backtest window offset: **`1080`** (see below)The first predictor uses Facebook's [Prophet](https://facebook.github.io/prophet/) algorithm, an open-source framework based on additive-component regression that has become very popular ([paper](https://peerj.com/preprints/3190/)).First, review the size of the target series data, which we truncated at the end of the dataset to hold out test data.
###Code
n_train_samples = len(target_train_df["timestamp"].unique())
n_test_samples = len(target_test_df["timestamp"].unique())
n_related_samples = len(related_df["timestamp"].unique())
print(f" {n_train_samples} training samples")
print(f"+ {n_test_samples} testing samples")
print(f"= {n_related_samples} total samples (related dataset)")
assert (
n_train_samples + n_test_samples == n_related_samples
), "Mismatch between target train+test timeseries and related timeseries coverage"
###Output
_____no_output_____
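###Markdown
For completeness, here is a hedged sketch of how the same Prophet predictor could be requested through the SDK; the console flow above is what the lab actually uses, and `dataset_group_arn` comes from the earlier dataset-group sketch.
###Code
# Hedged sketch only: mirrors the console settings for the Prophet predictor
forecast.create_predictor(
    PredictorName="bikeshare_prophet_predictor",
    AlgorithmArn="arn:aws:forecast:::algorithm/Prophet",
    ForecastHorizon=336,
    PerformAutoML=False,
    EvaluationParameters={
        "NumberOfBacktestWindows": 4,
        "BackTestWindowOffset": 1080,
    },
    InputDataConfig={
        "DatasetGroupArn": dataset_group_arn,  # from the dataset-group sketch
        "SupplementaryFeatures": [{"Name": "holiday", "Value": "US"}],
    },
    FeaturizationConfig={"ForecastFrequency": "H"},
)
###Output
_____no_output_____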
###Markdown
The [`BackTestWindowOffset`](https://docs.aws.amazon.com/forecast/latest/dg/API_EvaluationParameters.html#forecast-Type-EvaluationParameters-BackTestWindowOffset) parameter sets where the last forecast validation window starts. By default it is pre-filled with the same value as the `ForecastHorizon`, which assumes there is no separate data held out for external testing.In this example we prepared a separate test set, so the value must be increased by the number of samples held out for testing (see the code cell above). Assuming your settings so far match, change the value to 336 + 744 = **1,080**.The *NumberOfBacktestWindows* parameter is the number of separate windows Amazon Forecast uses to evaluate [model accuracy](https://docs.aws.amazon.com/forecast/latest/dg/metrics.html). Measuring performance over several validation windows, rather than only the final window of the dataset, produces a more robust model evaluation.
Step 9: Train a "DeepAR+" predictorDeepAR+ is Amazon Forecast's "signature" algorithm. It is based on the same deep-learning timeseries modelling [approach](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar_how-it-works.html) as the [SageMaker built-in DeepAR algorithm](https://docs.aws.amazon.com/sagemaker/latest/dg/deepar.html), but Amazon Forecast adds several proprietary extensions and improvements in DeepAR+.There is no need to wait for the Prophet predictor above to finish: we can start training another predictor right away. Click "Predictors" in the left-hand menu, then click the "Train new predictor" button.> ** Train predictor** - Predictor name: **`bikeshare_deeparplus_predictor`** - Forecast horizon: **`336 (2 weeks at 24hrs/day)`** - Forecast frequency: **`hour`** (to match the frequency of the source data) - Algorithm selection: **`Manual`** - Algorithm: **`Deep_AR_Plus`** - Country for holidays: **`United States`** (this dataset is US-based) - Number of backtest windows: **`4`** - Backtest window offset: **`1080`** (see above)After starting training, return to the "Predictors" screen and check the status of the two predictors being trained.
Step 10: Create forecasts (and maybe custom predictors?)If you would like to fit other models (e.g. AutoML model selection, or one of the baseline architectures such as ARIMA), you can kick off more training jobs in a similar way.The next step is to create a "forecast" for each predictor: this runs the model and extracts the prediction confidence intervals.You can start creating a forecast as soon as its predictor's training completes, and since Prophet trains relatively quickly it may be available around now. Because training predictors and creating forecasts can take a while, another option is to train the other SageMaker models in the meantime.To create a forecast, click "Forecasts" in the left-hand menu and then click the "Create a Forecast" button. Use the following settings for each forecast:> ** Create a forecast** - Forecast name: **`bikeshare_prophet_forecast`**, **`bikeshare_deeparplus_forecast`**, etc - Predictor: **`select from the dropdown`** (a predictor does not appear in the dropdown until its training has finished) - Forecast types : **`.10, .50, .90, mean`**In the AWS Console you can select each forecast from the list as soon as it has been created, and you would normally enter the **Forecast ARN** below. In this notebook the **Forecast ARN** values are fetched via the AWS SDKs, so once the forecast jobs are complete you can simply run the cells below (or re-run the whole notebook with Run All) to see the results.**Note that a forecast ARN and a predictor ARN are different things!** The list of created forecasts is available under "Forecasts" in the left-hand menu.
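As a reference only, a hedged sketch of the equivalent `create_forecast` call is shown below; `prophet_predictor_arn` is a placeholder for the predictor ARN visible in the console (or returned by `create_predictor`) and is not defined elsewhere in this notebook.
###Code
# Hedged sketch only: the lab creates forecasts from the console
forecast.create_forecast(
    ForecastName="bikeshare_prophet_forecast",
    PredictorArn=prophet_predictor_arn,  # placeholder ARN, not defined in this notebook
    ForecastTypes=["0.10", "0.50", "0.90", "mean"],
)
###Output
_____no_output_____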
###Code
forecast= boto3.client('forecast')
forecast_result = forecast.list_forecasts(
Filters=[
{
'Key': 'Status',
'Value': 'ACTIVE',
'Condition': 'IS'
},
]
)
bikeshare_prophet_forecast = ""
bikeshare_deeparplus_forecast = ""
try:
for f_result in forecast_result['Forecasts']:
if f_result['ForecastName'] =='bikeshare_prophet_forecast':
bikeshare_prophet_forecast = f_result['ForecastArn']
print('bikeshare_prophet_forecast Status : ACTIVE')
except:
print('bikeshare_prophet_forecast Status : CREATE_IN_PROGRESS or Nothing')
try:
for f_result in forecast_result['Forecasts']:
if f_result['ForecastName'] =='bikeshare_deeparplus_forecast':
bikeshare_deeparplus_forecast = f_result['ForecastArn']
print('bikeshare_deeparplus_forecast Status : ACTIVE')
except:
print('bikeshare_deeparplus_forecast Status : CREATE_IN_PROGRESS or Nothing')
forecast_arns = {
# Each example should look something like this:
# "a_nice_name": "arn:aws:forecast:[REGION?]:[ACCOUNT?]:forecast/[FORECASTNAME?]"
"bikeshare_prophet_forecast": bikeshare_prophet_forecast, # TODO ,
"bikeshare_deeparplus_forecast": bikeshare_deeparplus_forecast# TODO
# More entries if you created other forecasts with different settings too?
}
###Output
_____no_output_____
###Markdown
Step 11: Review model accuracy metricsBecause we generate probabilistic forecasts with *confidence intervals*, evaluating the results is not as simple as comparing RMSE scores. There is a **trade-off** between two kinds of metric:* accuracy: whether the actual values fall inside the predicted confidence interval / probability distribution* precision: how narrow the predicted confidence interval is.Predictor metrics are computed on the backtesting windows within the training dataset, and you can view them directly in the AWS Console:* Go to "Predictors" in the left-hand menu* Select the predictor you want to review among those whose training has completed* Scroll down a little to the "Predictor metrics" section.As in the screenshot, you can see the RMSE summarised as an average over each prediction window, plus the average and weighted quantile losses at the three evaluation points of 10%, 50% and 90%.**Based on these metrics, which predictor performs best? What patterns do you see in accuracy across the different prediction windows?**
Step 12: Visualise and evaluate forecast qualityYou can inspect forecast results directly in "Forecast Lookup" in the left-hand menu.In this notebook we download the results programmatically with the Forecast Query API and visualise them as graphs; you can also build your own visualisations or custom evaluation metrics.The model understood the source data timestamps for training, but inference has stricter requirements, so we need to generate start and end timestamps in a suitable ISO format.
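(The backtest metrics shown in the console can also be pulled programmatically; a hedged sketch is below, again using a `prophet_predictor_arn` placeholder that is not defined in this notebook.)
###Code
# Hedged sketch only: fetch the backtest metrics via the SDK
metrics = forecast.get_accuracy_metrics(PredictorArn=prophet_predictor_arn)
for result in metrics["PredictorEvaluationResults"]:
    for window in result["TestWindows"]:
        print(window["EvaluationType"], window["Metrics"]["RMSE"])
###Output
_____no_output_____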
###Code
first_test_ts = target_test_df["timestamp"].iloc[0]
# Remember we predict to 2 weeks horizon
# [Python 3.6 doesn't have fromisoformat()]
test_end_dt = datetime(
int(first_test_ts[0:4]),
int(first_test_ts[5:7]),
int(first_test_ts[8:10]),
int(first_test_ts[11:13]),
int(first_test_ts[14:16]),
int(first_test_ts[17:])
) + timedelta(days=14, hours=-1)
# Forecast wants a slightly different timestamp format to the dataset:
fcst_start_date = first_test_ts.replace(" ", "T")
fcst_end_date = test_end_dt.isoformat()
print(f"Forecasting\nFrom: {fcst_start_date}\nTo: {fcst_end_date}")
forecasts = {
predictor_name: {
"forecast_arn": forecast_arn,
"forecasts": {
item_id: forecast_query.query_forecast(
ForecastArn=forecast_arn,
StartDate=fcst_start_date,
EndDate=fcst_end_date,
Filters={ "item_id": item_id }
)
for item_id in item_ids }
}
    for (predictor_name, forecast_arn) in forecast_arns.items() if forecast_arn != ''}
###Output
_____no_output_____
###Markdown
Because Amazon Forecast and the various SageMaker models produce their outputs in different formats, we **standardise the results** and save them to a local CSV file so that they can be compared.
###Code
clean_results_df = pd.DataFrame()
for predictor_name, predictor_data in forecasts.items():
for item_id, forecast_data in predictor_data["forecasts"].items():
predictions = forecast_data["Forecast"]["Predictions"]
pred_mean_df = pd.DataFrame(predictions["mean"])
pred_timestamps = pd.to_datetime(pred_mean_df["Timestamp"].apply(lambda s: s.replace("T", " ")))
df = pd.DataFrame()
df["timestamp"] = pred_timestamps
df["model"] = f"amzforecast-{predictor_name}"
df["customer_type"] = item_id
df["mean"] = pred_mean_df["Value"]
df["p10"] = pd.DataFrame(predictions["p10"])["Value"]
df["p50"] = pd.DataFrame(predictions["p50"])["Value"]
df["p90"] = pd.DataFrame(predictions["p90"])["Value"]
clean_results_df = clean_results_df.append(df)
!mkdir -p results/amzforecast
clean_results_df.to_csv(
f"./results/amzforecast/results_clean.csv",
index=False
)
print("Clean results saved to ./results/amzforecast/results_clean.csv")
clean_results_df.head()
###Output
_____no_output_____
###Markdown
Finally, we visualise the results using the standardised format. (To keep the plotting in the notebook simple, we use the plotting helper in the `util` folder.)
###Code
# First, prepare the actual data (training + test) for easy plotting:
first_plot_dt = test_end_dt - timedelta(days=21)
actuals_df = target_train_df.append(target_test_df)
actuals_df["timestamp"] = pd.to_datetime(actuals_df["timestamp"])
actuals_plot_df = actuals_df[
(actuals_df["timestamp"] >= first_plot_dt)
& (actuals_df["timestamp"] <= test_end_dt)
]
actuals_plot_df.rename(columns={ "item_id": "customer_type"}, inplace=True)
util.plot_fcst_results(actuals_plot_df, clean_results_df)
###Output
_____no_output_____ |
wrangling/spotify_api.ipynb | ###Markdown
raw data
###Code
df = pd.read_pickle('../data/songs_counts_200.pkl')
# df = df[:550]
df
len(df)
###Output
_____no_output_____
###Markdown
track features
###Code
# define batches
batch_size = 100
num_batches = math.ceil(len(df)/batch_size)
# initialize list to save API calls
track_features = []
start_time = time.time()
# looping through the batches
for i in range(num_batches):
# define start and end of the batch
start_point = i*batch_size
end_point = min(start_point + batch_size, len(df))
# API call
track_list = list(df['track_uri'][start_point:end_point])
track_features.extend(sp.audio_features(track_list))
if i%100 == 0:
print('{}/{}, {}s'.format(i, num_batches, time.time()-start_time))
start_time = time.time()
# track_features = [i for i in track_features if i is not None]
track_features_df = pd.DataFrame(track_features)
track_features_df
counter = 0  # output-file version counter (comment this line out on re-runs to keep incrementing)
track_features_df.to_csv('../data/track_features'+str(counter)+'.csv')
track_features_df.to_pickle('../data/track_features'+str(counter)+'.pkl')
counter += 1
###Output
_____no_output_____
###Markdown
artist info
###Code
unique_artists = list(df['artist_uri'].unique())
len(unique_artists)
# define batches
batch_size = 50
num_batches = math.ceil(len(unique_artists)/batch_size)
# initialize list to save API calls
artist_info = []
start_time = time.time()
# looping through the batches
for i in range(num_batches):
# define start and end of the batch
start_point = i*batch_size
end_point = min(start_point + batch_size, len(df))
# API call
artist_list = unique_artists[start_point:end_point]
artist_info.extend(sp.artists(artist_list)['artists'])
if i%100 == 0:
print('{}/{}, {}s'.format(i, num_batches, time.time()-start_time))
start_time = time.time()
# artist_info = [i for i in artist_info if i is not None]
artist_info_df = pd.DataFrame(artist_info)
artist_info_df
len(set(list(chain.from_iterable(artist_info_df['genres']))))
counter = 0
# path = '../data/artist_info'+str(counter2)+'.pkl'
# with open(path, 'wb') as file:
# pickle.dump(artist_info, file)
# counter2 += 1
# with open(path, 'rb') as file:
# artist_data = pickle.load(file)
artist_info_df.to_csv('../data/artist_info'+str(counter)+'.csv')
artist_info_df.to_pickle('../data/artist_info'+str(counter)+'.pkl')
counter += 1
# counter
###Output
_____no_output_____
###Markdown
album info
###Code
unique_albums = list(df['album_uri'].unique())
len(unique_albums)
albums_exist = pd.read_pickle('../data/album_info1.pkl')  # previously fetched albums, used to skip re-querying
albums_exist = list(albums_exist['uri'])
print(len(albums_exist))
# albums_exist
# unique_albums_new = [i for i in unique_albums if i not in albums_exist]
unique_albums_new = list(set(unique_albums) - set(albums_exist))
print(len(unique_albums_new))
len(set(unique_albums_new))
unique_albums = unique_albums_new[140000:]
len(unique_albums)
# define batches
batch_size = 20
num_batches = math.ceil(len(unique_albums)/batch_size)
# initialize list to save API calls
album_info = []
start_time = time.time()
# looping through the batches
for i in range(num_batches):
# define start and end of the batch
start_point = i*batch_size
end_point = min(start_point + batch_size, len(df))
# API call
album_list = unique_albums[start_point:end_point]
album_info.extend(sp.albums(album_list)['albums'])
if i%100 == 0:
print('{}/{}, {}s'.format(i, num_batches, time.time()-start_time))
start_time = time.time()
album_info = [i for i in album_info if i is not None]
album_info_df = pd.DataFrame(album_info)
album_info_df
# counter = 4
album_info_df.to_csv('../data/album_info'+str(counter)+'.csv')
album_info_df.to_pickle('../data/album_info'+str(counter)+'.pkl')
counter += 1
counter
###Output
_____no_output_____
###Markdown
data joining
###Code
album_columns = ['genres','popularity','release_date','uri']
albums1 = pd.read_csv('../data/album_info1.csv', usecols=album_columns)
albums2 = pd.read_csv('../data/album_info2.csv', usecols=album_columns)
albums3 = pd.read_csv('../data/album_info3.csv', usecols=album_columns)
albums4 = pd.read_csv('../data/album_info4.csv', usecols=album_columns)
albums5 = pd.read_csv('../data/album_info5.csv', usecols=album_columns)
albums6 = pd.read_csv('../data/album_info6.csv', usecols=album_columns)
albums = pd.concat([albums1, albums2, albums3, albums4, albums5, albums6], axis=0, ignore_index=True)
albums = albums.rename(columns={'genres': 'album_genres', 'popularity': 'album_popularity', 'release_date': 'album_release_date', 'uri': 'album_uri'})
albums = albums.drop_duplicates()
albums
artist_columns = ['genres','popularity','uri']
artists = pd.read_csv('../data/artist_info1.csv', usecols=artist_columns)
artists = artists.rename(columns={'genres': 'artist_genres', 'popularity': 'artist_popularity', 'uri': 'artist_uri'})
artists = artists.drop_duplicates()
artists
track_columns = ['danceability','energy','key','loudness','mode','speechiness','acousticness','instrumentalness','liveness','valence','tempo','time_signature','uri']
tracks = pd.read_csv('../data/track_features3.csv', usecols=track_columns)
tracks = tracks.rename(columns={'uri': 'track_uri'})
tracks = tracks.drop_duplicates()
tracks
master = pd.read_pickle('../data/songs_counts_200.pkl')
master['song_id'] = master.index
master
master = master.merge(tracks, on='track_uri', suffixes=(None, '_tracks'))
master = master.merge(artists, on='artist_uri', suffixes=(None, '_artists'))
master = master.merge(albums, on='album_uri', suffixes=(None, '_albums'))
master = master.set_index('song_id')
master
master.shape
master.to_csv('../data/master200.csv')
master.to_pickle('../data/master200.pkl')
###Output
_____no_output_____ |
notebooks/02_RNAAS.ipynb | ###Markdown
RNAAS**Author(s):** Weixiang Yu & Gordon Richards**Last updated:** 12-09-20**Short description:**This notebook contains the code to make the figure included in the RNAAS paper. 0. Software Setup
###Code
import matplotlib.pyplot as plt
import matplotlib as mpl
import pandas as pd
import numpy as np
import glob
import os
mpl.rc_file('../src/lsst_dcr.rc')
%matplotlib inline
# automatically extract username
your_username = os.getcwd().split('/')[5]
print(f'Your automatically extracted username is: {your_username}.'
'\nIf it is incorrect, please mannually reset it.')
###Output
Your automatically extracted username is: ywx649999311.
If it is incorrect, please mannually reset it.
###Markdown
Import the sims_maf modules needed.
###Code
# import lsst.sims.maf modules
import lsst.sims.maf.db as db
import lsst.sims.maf.metrics as metrics
import lsst.sims.maf.slicers as slicers
import lsst.sims.maf.stackers as stackers
from lsst.sims.maf.stackers import BaseStacker
import lsst.sims.maf.plots as plots
import lsst.sims.maf.metricBundles as metricBundles
# import convenience functions
from opsimUtils import *
###Output
_____no_output_____
###Markdown
1. Compare DCR metric to desired slopes 1.1 Load in simulated DCR data (used to define metric)Files created by Tina Peters as part of the efforts discussed at [DCR_AGN_metric_analysis.ipynb](https://github.com/RichardsGroup/LSSTprep/blob/master/DCR/DCR_AGN_metric_analysis.ipynb), which records the means colors and DCR slopes as a function of redshift for SDSS quasars. There seems to be some discrepancy between the g-band slopes and the plots of Kaczmarczik et al. 2009, so they should be double-checked before finalizing any quasar-specific DCR metric.
###Code
# load in data and merge into one df
dcr_data_dir = '../data/DCR_data/'
dfZ = pd.read_csv(os.path.join(dcr_data_dir, 'fittingS82_zshifts.dat'))
dfQSO = pd.read_csv(os.path.join(dcr_data_dir, 'fittingS82_zshiftfit.dat'), \
index_col=0, header=None, sep=' ').T.dropna().reset_index(drop=True)
dfDCR = pd.concat([dfZ, dfQSO], axis=1)
dfDCR.head()
###Output
_____no_output_____
###Markdown
1.2 Load in metric results
###Code
if your_username == '': # do NOT put your username here, put it in the cell at the top of the notebook.
raise Exception('Please provide your username! See the top of the notebook.')
resultDbPath = f'/home/idies/workspace/Temporary/{your_username}/scratch/MAFOutput/DCR/RNAAS/ResultDBs/'
metricDataPath = f'/home/idies/workspace/Temporary/{your_username}/scratch/MAFOutput/DCR/RNAAS/MetricData/'
# import metric evaluations
bundleDicts = {}
resultDbsView = getResultsDbs(resultDbPath)
for runName in resultDbsView:
bundleDicts[runName] = bundleDictFromDisk(resultDbsView[runName], runName, metricDataPath)
# check keys
dbRuns = list(resultDbsView.keys())
bd_keys = list(bundleDicts[dbRuns[1]].keys())
print(bd_keys)
###Output
[(1, 'DCR_20_g'), (2, 'DCR_22_g'), (3, 'DCR_24_g'), (4, 'DCR_20.15_u'), (5, 'DCR_22.15_u'), (6, 'DCR_24.15_u')]
###Markdown
__Note:__ The keys you see (in your own notebook) could be different from what is shown above, in which case you will need to modify the plotting code below to make the cell run properly. 2. Make plots
###Code
# return median metric data
def get_dcr_median(mb):
mask = mb.metricValues.mask
data = mb.metricValues.data[~mask]
data = data[~(np.isnan(data) | np.isinf(data))]
return np.median(data)
# get the median values from all opsims
# for normalization in plotting
def get_metric_medians(key, bd, func):
mds = []
for run in bd:
keys = [*bd[run].keys()]
run_key = [elem for elem in keys if elem[1] == key[1]][0]
mds.append(func(bd[run][run_key]))
return mds
# get the metrics for plotting
Key1, Key2 = (1, 'DCR_22_g'), (4, 'DCR_22.15_u')
fig = plt.figure(figsize=(10,4.5), dpi=200)
ax1 = fig.add_axes([0.065, 0.15, 0.34, 0.8])
ax2 = fig.add_axes([0.645, 0.15, 0.34, 0.8])
# plot right panel
gKey = Key1
uKey = Key2
mds_g = np.sort(get_metric_medians(gKey, bundleDicts, get_dcr_median))
mds_u = np.sort(get_metric_medians(uKey, bundleDicts, get_dcr_median))
# create normalization object
gNorm = mpl.colors.LogNorm(vmin=mds_g[1], vmax=mds_g[-1])
uNorm = mpl.colors.LogNorm(vmin=mds_u[1], vmax=mds_u[-1])
# gNorm = mpl.colors.Normalize(vmin=mds_g[1], vmax=mds_g[-1]*1.04)
# uNorm = mpl.colors.Normalize(vmin=mds_u[1], vmax=mds_u[-1])
# separate loops for u and g to keep the legend clean
for i, run in enumerate(resultDbsView):
# look for the correct combination of metricID and metricName
keys = [*bundleDicts[run].keys()]
metricKeyG = [elem for elem in keys if elem[1] == gKey[1]][0]
md_g = get_dcr_median(bundleDicts[run][metricKeyG])
if run == 'dcr_nham1_ugri_v1.5_10yrs':
ax2.plot(dfDCR['zshifts'].values, np.abs(dfDCR['g-slope']/md_g), \
color='k', linewidth=1, alpha=0.5)
else:
ax2.plot(dfDCR['zshifts'].values, np.abs(dfDCR['g-slope']/md_g), \
color=mpl.cm.summer(gNorm(md_g)), linewidth=0.7)
# separate loops for u and g to keep the legend clean
for i, run in enumerate(resultDbsView):
# look for the correct combination of metricID and metricName
keys = [*bundleDicts[run].keys()]
metricKeyU = [elem for elem in keys if elem[1] == uKey[1]][0]
md_u = get_dcr_median(bundleDicts[run][metricKeyU])
if run == 'dcr_nham1_ugri_v1.5_10yrs':
ax2.plot(dfDCR['zshifts'].values, np.abs(dfDCR['u-slope']/md_u), linestyle='--', \
color='k', linewidth=1, alpha=0.5)
else:
ax2.plot(dfDCR['zshifts'].values, np.abs(dfDCR['u-slope']/md_u), linestyle='--', \
color=mpl.cm.winter(uNorm(md_u)), linewidth=0.7)
g_line = ax2.plot([], [], label='g band', linewidth=1, color='k')
u_line = ax2.plot([], [], label='u band', linestyle='--', linewidth=1, color='k')
# option to set uniform ylim
ylim = 22
if ylim is not None:
ax2.set_ylim(top=ylim, bottom=-.5)
ax2.set_xlim(0.2, 4.2)
ax2.tick_params(top=True, right=True, which='both')
ax2.yaxis.set_major_locator(plt.FixedLocator([5, 10, 15, 20]))
ax2.set_xlabel("Redshift")
ax2.set_ylabel("Abs(S/N)", fontsize=14)
ax2.legend(handles=(g_line[0], u_line[0]), loc=2)
# get normalization shift
run = 'baseline_2snaps_v1.5_10yrs'
keys = [*bundleDicts[run].keys()]
metricKey = [elem for elem in keys if elem[1] == Key1[1]][0]
norm_precision = get_dcr_median(bundleDicts[run][metricKey])
# get plotting order
unsort_mds_g = get_metric_medians(gKey, bundleDicts, get_dcr_median)
runs = list(bundleDicts.keys())
sort_order = np.argsort(unsort_mds_g)
# other plot setting
density = False
bins = 60
# plot left panel
for order in sort_order:
run = runs[order]
# look for the correct combination of metricID and metricName
keys = [*bundleDicts[run].keys()]
metricKey = [elem for elem in keys if elem[1] == Key1[1]][0]
# need to mask the pixels that have no available data
mask = bundleDicts[run][metricKey].metricValues.mask
data = bundleDicts[run][metricKey].metricValues.data[~mask]
data = data[~(np.isnan(data) | np.isinf(data))]
# weights = np.ones_like(data)*54.967783/3600
# match color to panel2
md_g = get_dcr_median(bundleDicts[run][metricKey])
# plot
if run == 'dcr_nham1_ugri_v1.5_10yrs':
c = 'k'
_ = ax1.hist(norm_precision/data, bins=bins, histtype='step', color=c, \
density=density, alpha=0.5, label=f"{run.rsplit('_', 2)[0]}", zorder = 10)
else:
# deal with non standard db names
if run in ['third_obs_pt120v1.5_10yrs', 'footprint_gp_smoothv1.5_10yrs']:
run = run.replace('v1.5', '_v1.5')
c = mpl.cm.summer(gNorm(md_g))
_ = ax1.hist(norm_precision/data, bins=bins, histtype='step', color=c, \
density=density, label=f"{run.rsplit('_', 2)[0]}")
ax1.set_xscale('log', basex=10)
# tick & format
ax1.set_xbound(lower=0.47, upper=2.3)
ax1.tick_params(top=True, right=True, which='both')
ax1.xaxis.set_major_locator(plt.FixedLocator([0.5, 1, 1.5, 2]))
ax1.xaxis.set_major_formatter(plt.FormatStrFormatter('%.1f'))
ax1.xaxis.set_minor_locator(plt.NullLocator())
# label & legend
ax1.set_xlabel('DCR Precision Metric', fontsize=12)
ax1.legend(fontsize=7.5, bbox_to_anchor=(1.0, 1.02), edgecolor='k', loc=2, labelspacing=0.45)
ax1.yaxis.set_major_locator(plt.FixedLocator(np.array([500, 1000, 1500, 2000])/(54.967783/60)**2))
y_vals = ax1.get_yticks()
ax1.set_yticklabels(['{:.0f}'.format(x * (54.967783/60)**2) for x in y_vals], rotation=90)
ax1.set_ylabel('Area ($\mathrm{degree^{2}}$)', labelpad=7)
plt.savefig('summer_winters.pdf')
###Output
_____no_output_____ |
_notebooks/2020-03-07-Python5 - Les ensembles.ipynb | ###Markdown
Sets (set)
> Discovering the set data structure in Python
- toc: true
- badges: true
- comments: false
- categories: [python, ISN]

A Python set (***set***) is a structure that can hold several values, but unlike a list those values are unique and unordered. There is no way to access one particular element through its index. Sets are, however, extremely efficient for looking up an element: unlike lists, where a search has to walk through every element, sets rely on optimisation techniques (a hash table) that make lookups very fast. Here are a few illustrations of how a ***set*** is used.
###Code
# create a set
ensemble = {1,5,9,5,1,2,4}
ensemble
###Output
_____no_output_____
###Markdown
As you can see, the duplicate elements in *ensemble* have been removed, and the order displayed is not the order in which the elements were entered.
###Code
# let's try something...
ensemble[3]
###Output
_____no_output_____
###Markdown
Accessing elements by index, as with lists, is not possible; it simply makes no sense. Converting between list and set
###Code
liste = [1,5,9,5,1,2,4]
ensemble = set(liste)
ensemble
ensemble = {1, 9, 5, 4, 2}
liste = list(ensemble)
liste
###Output
_____no_output_____
###Markdown
Methods on sets Adding and removing: add and remove
###Code
ensemble = {1, 9, 5, 4, 2}
ensemble.add(18)
ensemble.remove(9)
ensemble
###Output
_____no_output_____
###Markdown
Be careful to check that an element is in the set before removing it, because otherwise...
###Code
ensemble.remove(3)
###Output
_____no_output_____
###Markdown
And hence... testing whether an element is present in a set: in
###Code
3 in ensemble
18 in ensemble
###Output
_____no_output_____
###Markdown
Length and the empty set
###Code
# the empty set is written set() (note: {} creates an empty dict, not a set)
vide = set()
vide.add(3)
vide.remove(3)
# compute the number of elements in a set
len(vide)
###Output
_____no_output_____
###Markdown
Application: create a function **ensembleCarres** that takes an integer $n$ as a parameter and returns a set containing the squares of the integers from 1 to $n$.
###Code
def ensembleCarres(n):
# YOUR CODE HERE
raise NotImplementedError()
ec = ensembleCarres(10)
assert len(ec)==10
assert 64 in ec
###Output
_____no_output_____
###Markdown
- Create a list *l* of the squares up to one million.
- Create a set *s* of the squares up to one million.
- Check whether $874466246641$ is a square
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Compare the **lookup times for the same number** in the set and in the list.
###Code
%%time
assert 874466246641 in s
%%time
assert 874466246641 in l
###Output
_____no_output_____
###Markdown
A few other methods on sets:
- s.isdisjoint(s2)
- s.issubset(s2) (equivalent to s <= s2)
- s.issuperset(s2) (equivalent to s >= s2)
- set.union(s1, s2, s3): returns the union of several sets
- set.intersection(s1, s2, s3): returns the intersection of several sets

Exercise: create a function **ensembleCubes** that takes an integer $n$ as a parameter and returns a set containing the cubes of the integers from 1 to $n$. (A short demonstration of the methods above comes first, in the next cell.)
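Before tackling the exercise, here is a quick demonstration of those methods on two small sets (a minimal illustrative sketch):
###Code
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
print(a.isdisjoint(b))         # False: they share 3 and 4
print({1, 2}.issubset(a))      # True, same as {1, 2} <= a
print(a.issuperset({1, 2}))    # True, same as a >= {1, 2}
print(set.union(a, b))         # {1, 2, 3, 4, 5, 6}
print(set.intersection(a, b))  # {3, 4}
###Output
_____no_output_____
###Markdown
Now for the exercise: implement **ensembleCubes** below.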
###Code
def ensembleCubes(n):
# YOUR CODE HERE
raise NotImplementedError()
assert 27 in ensembleCubes(10)
###Output
_____no_output_____
###Markdown
From this, deduce in one line of Python how many numbers between 1 and 100 are both squares and cubes.
###Code
# Type your line in the cell below
# Careful: no more than one line of Python!!
###Output
_____no_output_____ |
notebooks/Plot_Bfield.ipynb | ###Markdown
Mean of the square of the derivative of the k=2 Legendre polynomial
###Code
from numpy import linspace, gradient, trapz, arccos
from scipy.special import legendre
costh = linspace(-1,1,10000)
dB = legendre(2)(costh)
dB = gradient(dB,arccos(costh))
print(trapz(dB**2.0,costh)/2)
###Output
1.1999998699530594
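###Markdown
Analytic check (just the integral worked out, no new assumptions): with $P_2(x)=\tfrac{1}{2}(3x^2-1)$ and $x=\cos\theta$, $\frac{dP_2}{d\theta}=-3\cos\theta\sin\theta$, so $\tfrac{1}{2}\int_{-1}^{1}\left(\tfrac{dP_2}{d\theta}\right)^{2}dx=\tfrac{1}{2}\int_{-1}^{1}9x^{2}(1-x^{2})\,dx=\tfrac{9}{2}\left(\tfrac{2}{3}-\tfrac{2}{5}\right)=\tfrac{6}{5}=1.2$, in agreement with the numerical value above.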
|
CNN/.ipynb_checkpoints/hp-covidcxr-checkpoint.ipynb | ###Markdown
CNN Hyperparameters COVIDcxr Dataset
###Code
from fastai.vision.all import *
path = Path('/home/jupyter/covidcxr')
torch.cuda.empty_cache()
# fix result
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
SEED = 42
seed_everything(SEED)
df = pd.read_csv(path/'covidcxr.csv')
df
get_x=lambda x:path/f"{x[0]}"
get_y=lambda x:x[1]
splitter=RandomSplitter(seed=SEED)
metrics=[accuracy,
RocAuc(average='macro', multi_class='ovr'),
MatthewsCorrCoef(sample_weight=None),
Precision(average='macro'),
Recall(average='macro'),
F1Score(average='macro')]
item_tfms=Resize(480, method='squish', pad_mode='zeros', resamples=(2, 0))
batch_tfms=[*aug_transforms(mult=1.0, do_flip=False, flip_vert=False,
max_rotate=20.0, max_zoom=1.2, max_lighting=0.3, max_warp=0.2,
p_affine=0.75, p_lighting=0.75,
xtra_tfms=None, size=None, mode='bilinear', pad_mode='reflection',
align_corners=True, batch=False, min_scale=1.0),
Normalize.from_stats(*imagenet_stats)]
db = DataBlock(blocks=(ImageBlock(cls=PILImageBW), CategoryBlock),
get_x=get_x,
get_y=get_y,
splitter=splitter,
item_tfms = item_tfms,
batch_tfms=batch_tfms)
###Output
_____no_output_____
###Markdown
VGG-16 Epoch 10
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from torchvision.models import vgg16
arch = vgg16
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from torchvision.models import vgg16
arch = vgg16
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from torchvision.models import vgg16
arch = vgg16
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from torchvision.models import vgg16
arch = vgg16
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
VGG-19 Epoch 10
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from torchvision.models import vgg19
arch = vgg19
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from torchvision.models import vgg19
arch = vgg19
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from torchvision.models import vgg19
arch = vgg19
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from torchvision.models import vgg19
arch = vgg19
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-18 Epoch 10
###Code
arch = resnet18
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet18
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet18
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet18
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet18
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet18
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-34 Epoch 10
###Code
arch = resnet34
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet34
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet34
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet34
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet34
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet34
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
ResNet-50 Epoch 10
###Code
arch = resnet50
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
arch = resnet50
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
arch = resnet50
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
arch = resnet50
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
arch = resnet50
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
arch = resnet50
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = cnn_learner(dl, arch=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Efficientnet-B0 Epoch 10
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
epoch = 10
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
20
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
epoch = 20
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
40
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
epoch = 40
bs = 32
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Batch Size 8
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
bs = 8
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
16
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
bs = 16
epoch = 30
loss_func=CrossEntropyLossFlat()
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____
###Markdown
Loss Function LabelSmoothingCrossEntropyFlat()
###Code
from efficientnet_pytorch import EfficientNet
arch = EfficientNet.from_pretrained("efficientnet-b0")
loss_func=LabelSmoothingCrossEntropyFlat(axis=-1, eps=0.06, reduction='mean', flatten=True, floatify=False, is_2d=True)
bs = 32
epoch = 30
dl = db.dataloaders(df, bs=bs)
learn = Learner(dl, model=arch, loss_func=loss_func, metrics=metrics)
learn.fine_tune(epoch)
###Output
_____no_output_____ |
tutorials/PyCSEP_tutorial_catalog.ipynb | ###Markdown
Catalog-based forecast tutorial - UCERF3 Landers In this tutorial we will look at an example of a catalog-based forecast. Our goal is to test whether the forecast number of earthquakes from a UCERF3-ETAS aftershock model is consistent with observations for the 1992 Landers sequence.The PyCSEP package has been designed so that the order of the steps that we take to do this is very similar to that for the gridded forecasts with a few differences. This tutorial aims to familiarise the user with some of the differences involved and further understanding of how these new CSEP tests are carried out.Full documentation of the package can be found [here](https://docs.cseptesting.org/) and any issues can be reported on the [PyCSEP Github page](https://github.com/SCECcode/pycsep).
###Code
import numpy
import cartopy
# Most of the core functionality can be imported from the top-level csep package.
import csep
# Or you could import directly from submodules, like csep.core or csep.utils submodules.
from csep.core import regions, catalog_evaluations
from csep.core import poisson_evaluations as poisson
from csep.utils import datasets, time_utils, comcat, plots
###Output
_____no_output_____
###Markdown
1. Load forecastForecasts should define a time horizon in which they are valid. The choice is flexible for catalog-based forecasts, because the catalogs can be filtered to accommodate multiple end-times. Conceptually, these should be separate forecasts. For catalog-based forecasts, we need to explicitly compute bin-wise rates. Before we can compute the bin-wise rates we need to define a spatial region and a set of magnitude bin edges. The magnitude bin edges are the lower bound (inclusive) except for the last bin, which is treated as extending to infinity. We can bind these to the forecast object. The spatial region should also be explicitly defined, in contrast to the gridded forecast where this is extracted from the data. In this example, we use the RELM polygon included in the package. This can also be done by passing the region as keyword arguments into `csep.load_catalog_forecast()`.
###Code
### Set up model parameters
# Start and end time
start_time = time_utils.strptime_to_utc_datetime("1992-06-28 11:57:34.14")
end_time = time_utils.strptime_to_utc_datetime("1992-07-28 11:57:34.14")
# Magnitude bins properties
min_mw = 4.95
max_mw = 8.95
dmw = 0.1
# Create space and magnitude regions. The forecast is already filtered in space and magnitude
magnitudes = regions.magnitude_bins(min_mw, max_mw, dmw)
region = regions.california_relm_region()
# Bind region information to the forecast (this will be used for binning of the catalogs)
space_magnitude_region = regions.create_space_magnitude_region(region, magnitudes)
###Output
_____no_output_____
###Markdown
To reduce the file size of this example, we've already pre-filtered the catalogs to the appropriate magnitudes and spatial locations. The original forecast was computed for 1 year following the start date, so we still need to filter the catalog in time. We can do this by passing a list of filtering arguments to the forecast or updating the class. By default, the forecast loads catalogs on-demand, so the filters are applied as the catalog loads. On-demand means that until we loop over the forecast in some capacity, none of the catalogs are actually loaded! More fine-grain control and optimizations can be achieved by creating a `csep.core.forecasts.CatalogForecast` directly.
###Code
forecast = csep.load_catalog_forecast(datasets.ucerf3_ascii_format_landers_fname,
start_time = start_time,
end_time = end_time,
region = space_magnitude_region)
###Output
_____no_output_____
###Markdown
The `csep.core.forecasts.CatalogForecast` provides a method to compute the expected number of events in spatial cells. This requires a region with magnitude information.
###Code
# Assign filters to forecast (in this case time)
forecast.filters = [f'origin_time >= {forecast.start_epoch}', f'origin_time < {forecast.end_epoch}']
expected_rates = forecast.get_expected_rates(verbose=True)
###Output
_____no_output_____
###Markdown
The expected rates can now be plotted in a similar manner to the gridded forecast plots. Again, we can specify plot arguments as we did for the gridded forecasts.
###Code
args_forecast = {'title': 'Landers aftershock forecast',
'grid_labels': True,
'borders': True,
'feature_lw': 0.5,
'basemap': 'ESRI_imagery',
'cmap': 'rainbow',
'alpha_exp': 0.9,
'projection': cartopy.crs.Mercator(),
'clim':[-3.5, 0]}
ax = expected_rates.plot(plot_args = args_forecast)
###Output
_____no_output_____
###Markdown
2. Filter evaluation catalogIn this example we use the `csep.query_comcat` function to obtain a catalog directly from [ComCat](https://earthquake.usgs.gov/data/comcat/). We need to filter the ComCat catalog to be consistent with the forecast. This can be done either through the ComCat API or using catalog filtering strings (see the gridded forecast example). Here we'll use the Comcat API to make the data access quicker for this example. We still need to filter the observed catalog in space though.
###Code
# Obtain Comcat catalog and filter to region.
comcat_catalog = csep.query_comcat(start_time, end_time, min_magnitude=forecast.min_magnitude)
# Filter observed catalog using the same region as the forecast
comcat_catalog = comcat_catalog.filter_spatial(forecast.region)
###Output
_____no_output_____
###Markdown
3. Plot the catalogThe catalog can be plotted easily using the plot function.
###Code
comcat_catalog.plot()
###Output
_____no_output_____
###Markdown
* Let's try changing some plot arguments by looking at the docs 4. Composite plot Let's do a multiple plot, that includes the forecast expected rates and the observed catalog.* We must first create a forecast plot, which returns a matplotlib.pyplot.ax object.* This ax object should be passed to catalog.plot() as argument. * The plot order could be reversed, depending which layer is wanted above
###Code
args_catalog = {'basemap': 'ESRI_terrain',
'markercolor': 'black',
'markersize': 4}
ax_1 = expected_rates.plot(plot_args=args_forecast)
ax_2 = comcat_catalog.plot(ax=ax_1, plot_args=args_catalog)
###Output
_____no_output_____
###Markdown
4. Perform a test Now that we have a forecast and evaluation catalog, tests can be easily applied in a similar way as with gridded forecasts. For example, we can perform the Number test on the catalog based forecast using the observed catalog we obtained from Comcat.
###Code
number_test_result = catalog_evaluations.number_test(forecast, comcat_catalog)
ax = number_test_result.plot()
###Output
_____no_output_____
###Markdown
We can also quickly perform a spatial test
###Code
spatial_test_result = catalog_evaluations.spatial_test(forecast, comcat_catalog)
ax = spatial_test_result.plot()
###Output
_____no_output_____ |
sql-scavenger-hunt-day-1.ipynb | ###Markdown
If you haven't used BigQuery datasets on Kaggle previously, check out the Scavenger Hunt Handbook kernel to get started. SELECT, FROM & WHEREToday, we're going to learn how to use SELECT, FROM and WHERE to get data from a specific column based on the value of another column. For the purposes of this explanation, we'll be using this imaginary database, `pet_records` which has just one table in it, called `pets`, which looks like this:![](https://i.imgur.com/Ef4Puo3.png) SELECT ... FROM___The most basic SQL query is to select a single column from a specific table. To do this, you need to tell SELECT which column to select and then specify what table that column is from using from. > **Do you need to capitalize SELECT and FROM?** No, SQL doesn't care about capitalization. However, it's customary to capitalize your SQL commands and it makes your queries a bit easier to read.So, if we wanted to select the "Name" column from the pets table of the pet_records database (if that database were accessible as a BigQuery dataset on Kaggle , which it is not, because I made it up), we would do this: SELECT Name FROM `bigquery-public-data.pet_records.pets`Which would return the highlighted data from this figure.![](https://i.imgur.com/8FdVyFP.png) WHERE ...___When you're working with BigQuery datasets, you're almost always going to want to return only certain rows, usually based on the value of a different column. You can do this using the WHERE clause, which will only return the rows where the WHERE clause evaluates to true.Let's look at an example: SELECT Name FROM `bigquery-public-data.pet_records.pets` WHERE Animal = "Cat"This query will only return the entries from the "Name" column that are in rows where the "Animal" column has the text "Cat" in it. Those are the cells highlighted in blue in this figure:![](https://i.imgur.com/Va52Qdl.png) Example: What are all the U.S. cities in the OpenAQ dataset?___Now that you've got the basics down, let's work through an example with a real dataset. Today we're going to be working with the OpenAQ dataset, which has information on air quality around the world. (The data in it should be current: it's updated weekly.)To help get you situated, I'm going to run through a complete query first. Then it will be your turn to get started running your queries!First, I'm going to set up everything we need to run queries and take a quick peek at what tables are in our database.
###Code
# import package with helper functions
import bq_helper
# create a helper object for this dataset
open_aq = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="openaq")
# print all the tables in this dataset (there's only one!)
open_aq.list_tables()
###Output
_____no_output_____
###Markdown
I'm going to take a peek at the first couple of rows to help me see what sort of data is in this dataset.
###Code
# print the first couple rows of the "global_air_quality" dataset
open_aq.head("global_air_quality")
###Output
_____no_output_____
###Markdown
Great, everything looks good! Now that I'm set up, I'm going to put together a query. I want to select all the values from the "city" column for the rows where the "country" column is "us" (for "United States"). > **What's up with the triple quotation marks (""")?** These tell Python that everything inside them is a single string, even though we have line breaks in it. The line breaks aren't necessary, but they do make it much easier to read your query.
###Code
# query to select all the items from the "city" column where the
# "country" column is "us"
query = """SELECT city
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE country = 'US'
"""
###Output
_____no_output_____
###Markdown
> **Important:** Note that the argument we pass to FROM is *not* in single or double quotation marks (' or "). It is in backticks (\`). If you use quotation marks instead of backticks, you'll get this error when you try to run the query: `Syntax error: Unexpected string literal` Now I can use this query to get information from our open_aq dataset. I'm using the `BigQueryHelper.query_to_pandas_safe()` method here because it won't run a query if it's larger than 1 gigabyte, which helps me avoid accidentally running a very large query. See the [Scavenger Hunt Handbook ](https://www.kaggle.com/rtatman/sql-scavenger-hunt-handbook/)for more details.
###Code
# the query_to_pandas_safe will only return a result if it's less
# than one gigabyte (by default)
us_cities = open_aq.query_to_pandas_safe(query)
###Output
_____no_output_____
###Markdown
Now I've got a dataframe called us_cities, which I can use like I would any other dataframe:
###Code
# What five cities have the most measurements taken there?
us_cities.city.value_counts().head()
###Output
_____no_output_____
###Markdown
Scavenger hunt___Now it's your turn! Here's the questions I would like you to get the data to answer:* Which countries use a unit other than ppm to measure any type of pollution? (Hint: to get rows where the value *isn't* something, use "!=")* Which pollutants have a value of exactly 0?In order to answer these questions, you can fork this notebook by hitting the blue "Fork Notebook" at the very top of this page (you may have to scroll up). "Forking" something is making a copy of it that you can edit on your own without changing the original.
###Code
# Your code goes here :)
query1 = """SELECT country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit != 'ppm'
"""
query2 = """SELECT pollutant
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE value = 0.0
"""
country_unit = open_aq.query_to_pandas_safe(query1)
pollutant_v = open_aq.query_to_pandas_safe(query2)
country_unit.country.unique()
pollutant_v.pollutant.unique()
###Output
_____no_output_____ |
examples/Test_NXDS.ipynb | ###Markdown
Create By default, a graph named `_default` is created
###Code
nxds.create_graph(val=sample_graph1)
###Output
_____no_output_____
###Markdown
Create several identical graphs at once
###Code
nxds.create_graph(key=['graph2', 'graph3', 'graph4'], val = sample_graph2)
nxds.create_graph(key='foo')
nxds.create_node(sample_node, val = {'blah': True})
nxds.create_edge(sample_edge, {'baz': False})
###Output
_____no_output_____
###Markdown
Read By default, all graphs are read
###Code
nxds.read_graph()
###Output
_____no_output_____
###Markdown
Graph attributes live in the object's `.graph` dict (a NetworkX feature)
###Code
nxds.read_graph('_default')['_default'].graph
###Output
_____no_output_____
###Markdown
By default, all nodes are read
###Code
nxds.read_node()
###Output
_____no_output_____
###Markdown
Wildcard filtering (nodes named 0 in every graph)
###Code
nxds.read_node(('@*', 0))
###Output
_____no_output_____
###Markdown
By default, all edges are read
###Code
nxds.read_edge()
###Output
_____no_output_____
###Markdown
Wildcard filtering (edges pointing to node 0 in every graph). Note that an undirected Graph has no notion of in/out edges, so writing the wildcard this way automatically converts it to the edges incident to node 0 (a warning may be added to the logger for this in the future).
###Code
nxds.read_edge(('@*', ('@*', 0)))
###Output
_____no_output_____
###Markdown
Update
###Code
nxds.update_graph(val = GraphValType(
attr = {
'graph_title': 'A New Graph'
},
nodes = [
7, 8, 9
],
edges = [
(7, 8),
(9, 7)
],
node_attr = {
'role': 'follower'
},
edge_attr = {
'create_date': '2018-01-20'
}
))
nxds.read_graph('_default')['_default'].graph
nxds.update_node(('foo', 0), {'build': 'yes'})
nxds.read_node(('foo', 0))
nxds.update_edge(('foo', (0, 2)), {'hello': 'world'})
nxds.read_edge(('foo', (0, 2)))
###Output
_____no_output_____
###Markdown
Delete
###Code
nxds.delete_graph('foo')
nxds.read_graph()
nxds.delete_graph()
nxds.read_graph()
nxds.delete_node(('@*', 0))
nxds.read_node()
nxds.delete_edge(('@*', (2, '@*')))
nxds.read_edge()
###Output
_____no_output_____
###Markdown
flush, clear, and reload
###Code
nxds.flush()
nxds.clear()
nxds.reload()
###Output
_____no_output_____ |
[kaggle] Ingredients_For_Chicken_Dinner.ipynb | ###Markdown
**Univariate Analysis**
###Code
## Id
#search for duplicates
any(data['Id'].duplicated())
## Id
#total no of players
len(data['Id'])
## groupId
#Check NaN
data[data['groupId'].isnull()]
#No nan present
## groupId
#No. of people per group
groupIdData=pd.DataFrame(data['groupId'].value_counts())
groupIdData.reset_index(level=0, inplace=True)
groupIdData.columns = ['groupId', 'Members']
groupIdData.head()
## groupId
#Basic Stats on the members in each group
groupIdData['Members'].describe()
## groupId
# removing invalid groups with more than 4 members / could be just "useless" bots
groupIdDataValid=groupIdData[groupIdData['Members']<=4]
groupIdDataValid.head()
## groupId
#Basic Stats on the members in each VALID group
groupIdDataValid['Members'].describe()
## matchId
# Total no. people in a match
matchIdData=pd.DataFrame(data['matchId'].value_counts())
matchIdData.reset_index(level=0, inplace=True)
matchIdData.columns = ['matchId', 'Players']
matchIdData.head()
## matchId
# Total no. of matches
len(matchIdData)
## matchId
#Basic Stats on the players in each match
matchIdData['Players'].describe()
## matchId
# removing invalid matches where players are equal to 10 or less
# we need good competition to identify the most important features for a win
matchIdDataValid=matchIdData[matchIdData['Players']>10]
matchIdDataValid.tail()
## matchId
#Basic Stats on the members in each VALID group
matchIdDataValid['Players'].describe()
## Main DataSet
# remove invalid groups from further analysis
groupIdDataValidList=list(groupIdDataValid['groupId'])
data=data[data['groupId'].isin(groupIdDataValidList)]
matchIdDataValidList=list(matchIdDataValid['matchId'])
data=data[data['matchId'].isin(matchIdDataValidList)]
len(data['Id'])
## assists
#Basic Stats on the player assists in each match
data['assists'].describe()
## boosts
#Basic Stats on the player boosts in each match
data['boosts'].describe()
## damageDealt
#Basic Stats on the player damage dealt in each match
data['damageDealt'].describe()
## Killing Stats
# Basic Stats on player kills, headshotKills, roadKills and teamKills
killing=data[['kills','headshotKills','roadKills','teamKills']]
killing.describe(include='all')
## heals
#Basic Stats on the player healing items used in each match
data['heals'].describe()
## revives
# Basic Stats on the player reviving another player in a match
data['revives'].describe()
## weaponsAcquired
# Basic Stats on the no. of weapon picked up a player
data['weaponsAcquired'].describe()
## numGroups
# Basic Stats on the no. of groups joining a game
data['numGroups'].describe()
## killPlace
#Basic Stats on the player rank based on her/his kills in the match
# Just checking for a min max limits else it is not useful
data['killPlace'].describe()
## Travel
# Basic descriptive analysis of player travel distance on foot, vehicle and swim
# All values are in 'm'
data['totalDistance']=data.walkDistance+data.rideDistance+data.swimDistance
travel=data[['walkDistance','rideDistance','swimDistance','totalDistance']]
travel.describe(include='all')
## Elo Rating
# basic description of Kill and win Elo rating of each players
Elo=data[['winPoints','killPoints']]
Elo.describe(include='all')
### Does this make sense? Elo ratings evolve with time and a player's rating can go up or down, so taking the mean may not be meaningful
# Some rating for group participation
groupIdDataList=list(set(data['groupId']))
for group in groupIdDataList:
#if (i+1)%100 ==0:
# print(i+1,'/',len(groupIdDataList))
data.loc[data['groupId']==group,'totalTeamsKills']=data[data['groupId']==group]['kills'].mean()
data.loc[data['groupId']==group,'totalTeamWinPoints']=data[data['groupId']==group]['winPoints'].mean()
data.loc[data['groupId']==group,'totalTeamKillPoints']=data[data['groupId']==group]['killPoints'].mean()
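# NOTE: a vectorized pandas equivalent of the per-group loop above (illustrative only,
# not part of the original analysis) -- groupby/transform computes the same means:
#   data['totalTeamsKills']     = data.groupby('groupId')['kills'].transform('mean')
#   data['totalTeamWinPoints']  = data.groupby('groupId')['winPoints'].transform('mean')
#   data['totalTeamKillPoints'] = data.groupby('groupId')['killPoints'].transform('mean')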
# Some Elo-based expectation calculation
matchIdDataList=list(set(data['matchId']))
for match in matchIdDataList:
matchData=data[data['matchId']== match]
groupsMatchList=list(set(matchData['groupId']))
for group in groupsMatchList:
data.loc[data['groupId']==group,'ExpectedWinPoints']=1/(1+10**(-abs(matchData[matchData['groupId']==group]['totalTeamWinPoints'].mean()-matchData['totalTeamWinPoints'].mean())/400))
data.loc[data['groupId']==group,'ExpectedKillPoints']=1/(1+10**(-abs(matchData[matchData['groupId']==group]['totalTeamKillPoints'].mean()-matchData['totalTeamKillPoints'].mean())/400))
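# NOTE: the expression above is the standard Elo expected-score formula,
#   E = 1 / (1 + 10 ** (-dR / 400)),
# applied here with dR taken as the absolute difference between a team's mean
# rating and the mean rating across all teams in the match (added explanatory note).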
###Output
_____no_output_____
###Markdown
**Bivariate Analysis**
###Code
dropCols = ['Id', 'groupId', 'matchId']
# These identifiers have no bearing on the game outcome;
#'maxPlace'=='numGroups'
#data=data.drop(['maxPlace'], axis=1)
keepCols = [col for col in data.columns if col not in dropCols]
corr = data[keepCols].corr()
plt.figure(figsize=(15,10))
plt.title("Correlation Heat Map of Data")
sns.heatmap(
corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values,
annot=True,
cmap="RdYlGn",
)
plt.show()
data.to_csv('../working/cleanedTrain.csv')
print(os.listdir("../working"))
###Output
_____no_output_____ |
GrowthModels/SIR_Model.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pylab as plt
import pandas as pd
from scipy.integrate import ode
from ipywidgets import interactive,IntSlider,FloatSlider
###Output
_____no_output_____
###Markdown
Let $S$ be the susceptible population, $I$ the infected population, and $R$ the fraction of the population removed from the disease (recovered or dead). The SIR model is described by $\begin{cases}\frac{dS}{dt}=-\frac{\beta I S}{N},\\\frac{dI}{dt}=\frac{\beta I S}{N}-\gamma I,\\\frac{dR}{dt}=\gamma I\end{cases}$ Because $N$ is constant, $S(0)+I(0)+R(0)=N$ and in general $S(t)+I(t)+R(t)=N$. We can take $N=1$, so that $S(t)+I(t)+R(t)=1$ and the values of $S,I,R$ represent the fractions of Susceptible, Infected and Removed in the population. Without loss of generality we can rewrite the system as: $\begin{cases}\frac{dS}{dt}=-\beta I S,\\\frac{dI}{dt}=\beta I S-\gamma I,\\\frac{dR}{dt}=\gamma I\end{cases}$ We are interested in when $\frac{dI}{dt}<0$, which occurs when $\beta I S-\gamma I<0$, i.e. when $\gamma I\left(\frac{\beta S}{\gamma}-1\right)<0$, i.e. when $\frac{\beta S}{\gamma}<1$. Defining $R_{o}=\frac{\beta}{\gamma}$, the condition becomes $S R_{o}<1$.
###Code
def ode_SIR(t, Y,beta,gamma):
A=beta*Y[1]*Y[0]
B=gamma*Y[1]
return [-A,A-B,B]
r=ode(ode_SIR)
S0=0.99
I0=1-S0
R0=1-I0-S0
SIR0=[S0,I0,R0]
beta=0.05
gamma=0.01
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma)
t1=365*2
dt=1
sol=[]
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
plt.plot(sol[:,0],sol[:,1],label='S')
plt.plot(sol[:,0],sol[:,2],label='I')
plt.plot(sol[:,0],sol[:,3],label='R')
plt.xlabel(r"$Time$")
plt.legend();
plt.title(r"$R_o=\frac{\beta}{\gamma}=%1.2f \quad S_o R_o=%1.2f$"%(beta/gamma,S0*beta/gamma)+'\nis '+"$S_o R_o<1 \quad %s$"%(S0*beta/gamma<1))
plt.grid()
def update(i0,beta,gamma,t1):
S0=1-i0
SIR0=[S0,i0,0]
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma)
dt=1
sol=[]
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
plt.figure()
[plt.plot(sol[:,0],sol[:,i]) for i in (1,2,3)]
plt.title(r"$R_o=\frac{\beta}{\gamma}=%1.4f \quad S_o R_o=%1.2f$"
%(beta/gamma,S0*beta/gamma)+'\nis '+"$S_o R_o<1 \quad %s$"
%(S0*beta/gamma<1))
plt.grid()
plt.show()
r=ode(ode_SIR)
#interactive_plot = interactive(update, i0=(0, 0.2,0.01), beta=(0.01, 0.2, 0.002)
# ,gamma=(0.001,0.1,0.002),t1=(700,1000,5))
timeSlider=IntSlider(value=360,min=300,max=1080,step=30,description="days")
iniInfectedSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description="i0")
betaSlider=FloatSlider(value=0.05, min=0.01,max=0.2,step=0.01,readout_format='.2f',description=r'<MATH>β</MATH>')
gammaSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description=r'<MATH>γ</MATH>')
interactive_plot = interactive(update, t1=timeSlider,i0=iniInfectedSlider,gamma=gammaSlider, beta=betaSlider)
output = interactive_plot.children[-1]
output.layout.height = '450px'
interactive_plot
###Output
_____no_output_____
###Markdown
For dynamics over long periods of time we need to account for births and natural deaths. We can consider this new system of ODEs: $\begin{cases}\frac{dS}{dt}=\Lambda -\beta I S -\mu S,\\\frac{dI}{dt}=\beta I S-\gamma I -\mu I,\\\frac{dR}{dt}=\gamma I -\mu R\end{cases}$ Moreover, if we impose that the population is constant and equal to 1, i.e. $S(t)+I(t)+R(t)=1$, it can easily be shown that $\Lambda=\mu$. We are in general interested in the stationary states, i.e. when $\frac{dS}{dt}=\frac{dI}{dt}=\frac{dR}{dt}=0$. A trivial solution is easily found: $(S_{\infty}=1;\,I_{\infty}=0;\,R_{\infty}=0)$. If $I_{\infty}>0$, we can show that the stationary solution is: $S_{\infty}=R_o^{-1},\\I_{\infty}=\frac{\mu}{\beta}(R_o-1),\\R_{\infty}=\frac{\gamma}{\beta}(R_o-1),\\$ with $R_o=\frac{\beta}{\gamma+\mu}$. We also point out that, for the virus to remain endemic in the population, we must have $(R_o-1)>0$, i.e. $\frac{\beta}{\gamma+\mu}>1$.
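The stationary solutions above can be checked symbolically. This is only a minimal sketch and assumes `sympy` is available in the environment (it is not used elsewhere in this notebook).
###Code
import sympy as sp

S_, I_, R_ = sp.symbols('S I R')
b_, g_, m_ = sp.symbols('beta gamma mu', positive=True)
# Stationary conditions of the SIR model with vital dynamics (using Lambda = mu)
stationary_eqs = [m_ - b_*I_*S_ - m_*S_,      # dS/dt = 0
                  b_*I_*S_ - g_*I_ - m_*I_,   # dI/dt = 0
                  g_*I_ - m_*R_]              # dR/dt = 0
for sol in sp.solve(stationary_eqs, [S_, I_, R_], dict=True):
    print(sol)
###Output
_____no_output_____
###Markdown
This should return both the disease-free state $(1,0,0)$ and the endemic state, which matches the formulas above once rewritten in terms of $R_o=\frac{\beta}{\gamma+\mu}$.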
###Code
def ode_SIR_vd(t, Y,beta,gamma,mu):
Lambda=mu
A=beta*Y[1]*Y[0]
B=gamma*Y[1]
return [Lambda -A-mu*Y[0],A-B-mu*Y[1],B-mu*Y[2]]
r=ode(ode_SIR_vd)
i0=0.01
S0=1-i0
SIR0=[S0,i0,0]
mu=0.01
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma,mu)
dt=1
sol=[]
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
plt.figure()
[plt.plot(sol[:,0],sol[:,i]) for i in (1,2,3)]
def updateSIR_vd(i0,beta,gamma,mu,t1):
def fooPlot(ax,sol,i,j,mytitle):
'''
simple function to format phase space plot
'''
ax.plot(sol[:,i],sol[:,j])
ax.set_title(mytitle)
ax.grid()
S0=1-i0
SIR0=[S0,i0,0]
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma,mu)
dt=1
sol=[]
Ro=beta/(gamma+mu)
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
ax=plt.subplot(211)
#plt.figure()
mycolors=['b','r','g']
ax.hlines(1/Ro,0,t1,color='b',ls=':')
ax.hlines(mu*(Ro-1)/beta,0,t1,color='r',ls=':')
ax.hlines(gamma*(Ro-1)/beta,0,t1,color='g',ls=':')
ax.set_title(r"$R_o=\frac{\beta}{\gamma+\mu}=%.2f$"
%(Ro)+'\nis '+r"$R_o<1 \quad %s$"
%(Ro<1))
[ax.plot(sol[:,0],sol[:,i],color=mycolors[i-1]) for i in (1,2,3)]
plt.grid()
fooPlot(plt.subplot(234),sol,1,2,r"$S vs I$")
fooPlot(plt.subplot(235),sol,1,3,r"$S vs R$")
fooPlot(plt.subplot(236),sol,2,3,r"$I vs R$")
plt.tight_layout()
plt.show()
r=ode(ode_SIR_vd)
#interactive_plot = interactive(update, i0=(0, 0.2,0.01), beta=(0.01, 0.2, 0.002)
# ,gamma=(0.001,0.1,0.002),t1=(700,1000,5))
timeSlider=IntSlider(value=360,min=300,max=4000,step=30,description="days")
iniInfectedSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description="i0")
betaSlider=FloatSlider(value=0.05, min=0.01,max=0.2,step=0.01,readout_format='.2f',description=r'<MATH>β</MATH>')
gammaSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description=r'<MATH>γ</MATH>')
#LambdaSlider=FloatSlider(value=0.1, min=0.,max=0.3,step=0.01,description=r'<MATH>Λ</MATH>')
muSlider=FloatSlider(value=0.001, min=0.,max=0.02,step=0.002,readout_format='.3f',description=r'<MATH>Λ=μ</MATH>')
interactive_plot = interactive(updateSIR_vd,i0=iniInfectedSlider,
gamma=gammaSlider, beta=betaSlider,mu=muSlider, t1=timeSlider)
output = interactive_plot.children[-1]
output.layout.height = '450px'
interactive_plot
###Output
_____no_output_____
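###Markdown
Before adding vaccination, a small numerical sanity check of the stationary-state formulas above; the parameter values below are purely illustrative, chosen so that $R_o>1$.
###Code
# sanity check of the endemic equilibrium formulas (illustrative parameters only)
beta_e, gamma_e, mu_e = 0.05, 0.01, 0.001
Ro_e = beta_e/(gamma_e + mu_e)
S_inf = 1/Ro_e
I_inf = (mu_e/beta_e)*(Ro_e - 1)
R_inf = (gamma_e/beta_e)*(Ro_e - 1)
print("Ro =", round(Ro_e, 3))
print("(S_inf, I_inf, R_inf) =", (round(S_inf, 4), round(I_inf, 4), round(R_inf, 4)))
print("fractions sum to 1:", np.isclose(S_inf + I_inf + R_inf, 1.0))
###Output
_____no_output_____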
###Markdown
$\begin{cases}\frac{dS}{dt}=(1-p)\Lambda -\beta I S -\mu S,\\\frac{dI}{dt}=\beta I S-\gamma I -\mu I,\\\frac{dR}{dt}=\gamma I -\mu R,\\\frac{dV}{dt}=p\Lambda-\mu V\end{cases}$We have introduced vaccination at birth (V), with p being the fraction of children vaccinated at birth. Once again, under the constant-population assumption ($=1$) it can be shown that $\Lambda=\mu$. A trivial stationary solution is easily found: $(S_{\infty}=1;I_{\infty}=0;R_{\infty}=0)$. If $I_{\infty}>0$, we can show that the stationary solution is:$S_{\infty}=R_o^{-1},\\I_{\infty}=\frac{\mu}{\beta}((1-p)R_o-1),\\R_{\infty}=\frac{\gamma}{\beta}((1-p)R_o-1),\\$with $R_o=\frac{\beta}{\gamma+\mu}$. We also point out that, for the virus to remain endemic in the population, we must have $((1-p)R_o-1)>0$, i.e. $(1-p)\frac{\beta}{\gamma+\mu}>1$.
###Code
def ode_SIRV(t, Y,beta,gamma,mu,p):
A=beta*Y[1]*Y[0]
B=gamma*Y[1]
return [(1-p)*mu -A-mu*Y[0],A-B-mu*Y[1],B-mu*Y[2],mu*(p-Y[3])]
S0=1-i0
SIR0=[S0,i0,0,0]
r=ode(ode_SIRV)
p=0.1
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma,mu,p)
dt=1
sol=[]
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
plt.figure()
[plt.plot(sol[:,0],sol[:,i]) for i in (1,2,3,4)]
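# Quick sketch: the endemic condition above is ((1-p)*Ro - 1) > 0, so vaccination at birth
# prevents the endemic state once p > 1 - 1/Ro; evaluated here with the parameter values used above.
Ro_v = beta/(gamma + mu)
p_crit = max(0.0, 1 - 1/Ro_v)
print("Ro = %.2f, critical vaccination fraction p_c = %.2f (current p = %.2f)" % (Ro_v, p_crit, p))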
def updateSIRV(i0,beta,gamma,mu,t1,p):
def fooPlot(ax,sol,i,j,mytitle):
'''
simple function to format phase space plot
'''
ax.plot(sol[:,i],sol[:,j])
ax.set_title(mytitle)
ax.grid()
S0=1-i0
SIR0=[S0,i0,0,0]
r.set_initial_value(SIR0, 0).set_f_params(beta,gamma,mu,p)
dt=1
sol=[]
while r.successful() and r.t < t1:
sol.append(np.concatenate(([r.t+dt],r.integrate(r.t+dt))))
sol=np.array(sol)
Ro=beta/(gamma+mu)
ax=plt.subplot(211)
#plt.figure()
mycolors=['b','r','g','gold']
ax.hlines(1/Ro,0,t1,color='b',ls=':')
ax.hlines(mu*((1-p)*Ro-1)/beta,0,t1,color='r',ls=':')
ax.hlines(gamma*((1-p)*Ro-1)/beta,0,t1,color='g',ls=':')
ax.hlines(p,0,t1,color='gold',ls=':')
ax.set_title(r"$(1-p)R_o=\frac{\beta}{\gamma+\mu}=%.2f$"
%((1-p)*Ro) +'\nis '+r"$(1-p)R_o<1 \quad %s$"
%((1-p)*Ro<1))
[ax.plot(sol[:,0],sol[:,i],color=mycolors[i-1]) for i in (1,2,3,4)]
plt.grid()
fooPlot(plt.subplot(234),sol,1,2,r"$S\quad vs \quad I$")
fooPlot(plt.subplot(235),sol,1,3,r"$S\quad vs \quad R$")
fooPlot(plt.subplot(236),sol,2,3,r"$I \quad vs \quad R$")
plt.tight_layout()
plt.show()
r=ode(ode_SIRV)
#interactive_plot = interactive(update, i0=(0, 0.2,0.01), beta=(0.01, 0.2, 0.002)
# ,gamma=(0.001,0.1,0.002),t1=(700,1000,5))
timeSlider=IntSlider(value=360,min=300,max=10000,step=30,description="days")
iniInfectedSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description="i0")
betaSlider=FloatSlider(value=0.05, min=0.01,max=0.2,step=0.01,readout_format='.2f',description=r'<MATH>β</MATH>')
gammaSlider=FloatSlider(value=0.01, min=0.,max=0.3,step=0.01,description=r'<MATH>γ</MATH>')
pSlider=FloatSlider(value=0.1, min=0.,max=1,step=0.01,description='p')
muSlider=FloatSlider(value=0.001, min=0.,max=0.02,step=0.002,readout_format='.3f',description=r'<MATH>μ</MATH>')
interactive_plot = interactive(updateSIRV,i0=iniInfectedSlider,
gamma=gammaSlider, beta=betaSlider,mu=muSlider, t1=timeSlider,p=pSlider
)
output = interactive_plot.children[-1]
output.layout.height = '550px'
interactive_plot
###Output
_____no_output_____ |
yolo_train_Speed_bumpYasmi_inline_trainning_3.ipynb | ###Markdown
###Code
!git clone https://github.com/Yasmic/SpeedBump3AnchorBox
%cd SpeedBump3AnchorBox/
!pip install -r requirements.txt
%cd /content/SpeedBump3AnchorBox/
!rm bump.pkl
pwd
%pycat config.json
%%writefile config.json
{
"model" : {
"min_input_size": 288,
"max_input_size": 448,
"anchors": [195,7, 219,15, 276,40, 291,25, 351,11, 382,58, 386,17, 401,33, 409,96],
"labels": ["bump"]
},
"train": {
"train_image_folder": "dataset/hump/hump/data/",
"train_annot_folder": "dataset/hump/hump/dataAnot/",
"cache_name": "bump.pkl",
"train_times": 8,
"batch_size": 4,
"learning_rate": 1e-4,
"nb_epochs": 100,
"warmup_epochs": 3,
"ignore_thresh": 0.5,
"gpus": "0",
"grid_scales": [1,1,1],
"obj_scale": 5,
"noobj_scale": 1,
"xywh_scale": 1,
"class_scale": 1,
"tensorboard_dir": "logs",
"saved_weights_name": "bump.h5",
"debug": true
},
"valid": {
"valid_image_folder": "",
"valid_annot_folder": "",
"cache_name": "",
"valid_times": 1
}
}
!python gen_anchors.py -c config.json
import requests
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
file_id = '1ED7de6kr0u3TZTJkxN8YafWRgsme9WHP'
destination = 'backend.h5'
download_file_from_google_drive(file_id, destination)
import argparse
import os
import numpy as np
import json
from voc import parse_voc_annotation
from yolo import create_yolov3_model, dummy_loss
from generator import BatchGenerator
from utils.utils import normalize, evaluate, makedirs
from keras.callbacks import EarlyStopping, ReduceLROnPlateau
from keras.optimizers import Adam
from callbacks import CustomModelCheckpoint, CustomTensorBoard
from utils.multi_gpu_model import multi_gpu_model
import tensorflow as tf
import keras
from keras.models import load_model
from keras.utils import plot_model
config = tf.compat.v1.ConfigProto(
gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.9)
# device_count = {'GPU': 1}
)
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(session)
def create_training_instances(
train_annot_folder,
train_image_folder,
train_cache,
valid_annot_folder,
valid_image_folder,
valid_cache,
labels,
):
# parse annotations of the training set
train_ints, train_labels = parse_voc_annotation(train_annot_folder, train_image_folder, train_cache, labels)
# parse annotations of the validation set, if any, otherwise split the training set
if os.path.exists(valid_annot_folder):
valid_ints, valid_labels = parse_voc_annotation(valid_annot_folder, valid_image_folder, valid_cache, labels)
else:
print("valid_annot_folder not exists. Spliting the trainining set.")
train_valid_split = int(0.8*len(train_ints))
np.random.seed(0)
np.random.shuffle(train_ints)
np.random.seed()
valid_ints = train_ints[train_valid_split:]
train_ints = train_ints[:train_valid_split]
# compare the seen labels with the given labels in config.json
if len(labels) > 0:
overlap_labels = set(labels).intersection(set(train_labels.keys()))
print('Seen labels: \t' + str(train_labels) + '\n')
print('Given labels: \t' + str(labels))
# return None, None, None if some given label is not in the dataset
if len(overlap_labels) < len(labels):
print('Some labels have no annotations! Please revise the list of labels in the config.json.')
return None, None, None
else:
print('No labels are provided. Train on all seen labels.')
print(train_labels)
labels = train_labels.keys()
max_box_per_image = max([len(inst['object']) for inst in (train_ints + valid_ints)])
return train_ints, valid_ints, sorted(labels), max_box_per_image
def create_callbacks(saved_weights_name, tensorboard_logs, model_to_save):
makedirs(tensorboard_logs)
early_stop = EarlyStopping(
monitor = 'loss',
min_delta = 0.01,
patience = 7,
mode = 'min',
verbose = 1
)
checkpoint = CustomModelCheckpoint(
model_to_save = model_to_save,
filepath = saved_weights_name,# + '{epoch:02d}.h5',
monitor = 'loss',
verbose = 1,
save_best_only = True,
mode = 'min',
period = 1
)
reduce_on_plateau = ReduceLROnPlateau(
monitor = 'loss',
factor = 0.1,
patience = 2,
verbose = 1,
mode = 'min',
epsilon = 0.01,
cooldown = 0,
min_lr = 0
)
tensorboard = CustomTensorBoard(
log_dir = tensorboard_logs,
write_graph = True,
write_images = True,
)
return [early_stop, checkpoint, reduce_on_plateau, tensorboard]
def create_model(
nb_class,
anchors,
max_box_per_image,
max_grid, batch_size,
warmup_batches,
ignore_thresh,
multi_gpu,
saved_weights_name,
lr,
grid_scales,
obj_scale,
noobj_scale,
xywh_scale,
class_scale
):
if multi_gpu > 1:
with tf.device('/cpu:0'):
template_model, infer_model = create_yolov3_model(
nb_class = nb_class,
anchors = anchors,
max_box_per_image = max_box_per_image,
max_grid = max_grid,
batch_size = batch_size//multi_gpu,
warmup_batches = warmup_batches,
ignore_thresh = ignore_thresh,
grid_scales = grid_scales,
obj_scale = obj_scale,
noobj_scale = noobj_scale,
xywh_scale = xywh_scale,
class_scale = class_scale
)
else:
template_model, infer_model = create_yolov3_model(
nb_class = nb_class,
anchors = anchors,
max_box_per_image = max_box_per_image,
max_grid = max_grid,
batch_size = batch_size,
warmup_batches = warmup_batches,
ignore_thresh = ignore_thresh,
grid_scales = grid_scales,
obj_scale = obj_scale,
noobj_scale = noobj_scale,
xywh_scale = xywh_scale,
class_scale = class_scale
)
# load the pretrained weight if exists, otherwise load the backend weight only
if os.path.exists(saved_weights_name):
print("\nLoading pretrained weights.\n")
template_model.load_weights(saved_weights_name)
else:
template_model.load_weights("backend.h5", by_name=True)
if multi_gpu > 1:
train_model = multi_gpu_model(template_model, gpus=multi_gpu)
else:
train_model = template_model
optimizer = Adam(lr=lr, clipnorm=0.001)
train_model.compile(loss=dummy_loss, optimizer=optimizer)
return train_model, infer_model
!python train.py -c config.json
config_path = "config.json"
with open(config_path) as config_buffer:
config = json.loads(config_buffer.read())
###############################
# Parse the annotations
###############################
train_ints, valid_ints, labels, max_box_per_image = create_training_instances(
config['train']['train_annot_folder'],
config['train']['train_image_folder'],
config['train']['cache_name'],
config['valid']['valid_annot_folder'],
config['valid']['valid_image_folder'],
config['valid']['cache_name'],
config['model']['labels']
)
print('\nTraining on: \t' + str(labels) + '\n')
###############################
# Create the generators
###############################
train_generator = BatchGenerator(
instances = train_ints,
anchors = config['model']['anchors'],
labels = labels,
downsample = 32, # ratio between network input's size and network output's size, 32 for YOLOv3
max_box_per_image = max_box_per_image,
batch_size = config['train']['batch_size'],
min_net_size = config['model']['min_input_size'],
max_net_size = config['model']['max_input_size'],
shuffle = True,
jitter = 0.3,
norm = normalize
)
valid_generator = BatchGenerator(
instances = valid_ints,
anchors = config['model']['anchors'],
labels = labels,
downsample = 32, # ratio between network input's size and network output's size, 32 for YOLOv3
max_box_per_image = max_box_per_image,
batch_size = config['train']['batch_size'],
min_net_size = config['model']['min_input_size'],
max_net_size = config['model']['max_input_size'],
shuffle = True,
jitter = 0.0,
norm = normalize
)
if os.path.exists(config['train']['saved_weights_name']):
config['train']['warmup_epochs'] = 0
warmup_batches = config['train']['warmup_epochs'] * (config['train']['train_times']*len(train_generator))
os.environ['CUDA_VISIBLE_DEVICES'] = config['train']['gpus']
multi_gpu = len(config['train']['gpus'].split(','))
train_model, infer_model = create_model(
nb_class = len(labels),
anchors = config['model']['anchors'],
max_box_per_image = max_box_per_image,
max_grid = [config['model']['max_input_size'], config['model']['max_input_size']],
batch_size = config['train']['batch_size'],
warmup_batches = warmup_batches,
ignore_thresh = config['train']['ignore_thresh'],
multi_gpu = multi_gpu,
saved_weights_name = config['train']['saved_weights_name'],
lr = config['train']['learning_rate'],
grid_scales = config['train']['grid_scales'],
obj_scale = config['train']['obj_scale'],
noobj_scale = config['train']['noobj_scale'],
xywh_scale = config['train']['xywh_scale'],
class_scale = config['train']['class_scale'],
)
plot_model(
train_model,
to_file="model.png",
show_shapes=True,
show_layer_names=True)
callbacks = create_callbacks(config['train']['saved_weights_name'], config['train']['tensorboard_dir'], infer_model)
history = train_model.fit_generator(
generator = train_generator,
steps_per_epoch = len(train_generator) * config['train']['train_times'],
epochs = config['train']['nb_epochs'] + config['train']['warmup_epochs'],
verbose = 2 if config['train']['debug'] else 1,
validation_data = valid_generator,
validation_steps = len(valid_generator) * config['train']['train_times'], #np.floor(valid_generator / batch_size)
callbacks = callbacks,
workers = 4,
max_queue_size = 8
)
import matplotlib.pyplot as plt
plt.plot(history.history["loss"],label='loss')
plt.plot(history.history["val_loss"],label='val_loss')
plt.legend()
import requests
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
#https://drive.google.com/open?id=1WyfUiAKwidyUkc5COZ8QD2ymzH5_Gb-e
file_id = '1WyfUiAKwidyUkc5COZ8QD2ymzH5_Gb-e'
destination = 'bump.h5'
download_file_from_google_drive(file_id, destination)
!python predict.py -c config.json -i dataset/hump/hump_test/data/imgge15.jpg -x dataset/hump/hump_test/dataAnot/imgge15.xml
!python predict.py -c config.json -i dataset/hump/hump_test/data/bg1.jpg
!python predict.py -c config.json -i dataset/hump/hump_test/data/Image00017.jpg -x dataset/hump/hump_test/dataAnot/Image00017.xml
!python predict.py -c config.json -i dataset/hump/hump/data/Image00008.jpg -x dataset/hump/hump/dataAnot/Image00008.xml
!python predict.py -c config.json -i dataset/hump/hump/data/imgge62.jpg -x dataset/hump/hump/dataAnot/imgge62.xml
###Output
_____no_output_____
###Markdown
Results on unseen Data
###Code
import matplotlib.pyplot as plt
my_img = plt.imread('output/Image00017.jpg')
plt.imshow(my_img)
import matplotlib.pyplot as plt
my_img = plt.imread('output/imgge15.jpg')
plt.imshow(my_img)
import matplotlib.pyplot as plt
my_img = plt.imread('output/bg1.jpg')
plt.imshow(my_img)
import matplotlib.pyplot as plt
my_img = plt.imread('output/bg.jpg')
plt.imshow(my_img)
###Output
_____no_output_____
###Markdown
Results on Seen Data
###Code
import matplotlib.pyplot as plt
my_img = plt.imread('output/Image00008.jpg')
plt.imshow(my_img)
my_img = plt.imread('output/imgge62.jpg')
plt.imshow(my_img)
###Output
_____no_output_____
###Markdown
Copy to drive
###Code
from google.colab import drive
drive.mount('/content/drive')
cp -r /content/SpeedBump3AnchorBox/ /content/drive/'My Drive'/SpeedBump3AnchorBox
ls /content/drive/'My Drive'/SpeedBump3AnchorBox
drive.flush_and_unmount()
###Output
_____no_output_____ |
trafic_sign.ipynb | ###Markdown
Traffic sign classification with a CNN - Step 1: extract traffic sign images from the Korea Road Traffic Authority (KoROAD) traffic safety sign list - Source: https://www.koroad.or.kr/kp_web/safeDataView.do?board_code=DTBBS_030&board_num=100162 - Step 2: run image augmentation - considering the footage that will be used for real sign classification, account for rotation, fading, brightness, viewing angle, etc. - Step 3: modeling - implement a CNN with Keras - Step 4: validation - inspect the results visually Image augmentation - generate 1,000 augmented images per sign - set the distortion, rotation and related parameters to reflect how signs appear in video footage
###Code
import os
import glob
import numpy as np
path = './traffin_sign_png/'
full_names = os.listdir(path)
labels = sorted([each.split('.')[0] for each in full_names])
# example of brightness-based image augmentation
from tqdm.notebook import tqdm
from numpy import expand_dims
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import ImageDataGenerator
import cv2
os.mkdir('./traffic_image')
range_ = tqdm(labels)
for dir_num in range_:
    # load the source sign image
img = load_img("./traffin_sign_png/{}.png".format(dir_num))
    # convert to a NumPy array
data = img_to_array(img)
# expand dimension to one sample
samples = expand_dims(data, 0)
    # create the image data augmentation generator
datagen = ImageDataGenerator(
brightness_range=[0.2, 2.0],
zoom_range=[0.3, 1],
rotation_range=20,
height_shift_range=0.2,
width_shift_range=0.2)
# prepare iterator
it = datagen.flow(samples, batch_size=1)
os.mkdir('./traffic_image/{}'.format(dir_num))
for i in range(1000):
batch = it.next()
image = batch[0].astype("uint8")
        # reorder the channels to RGB before writing the file
b, g, r = cv2.split(image)
img_astro3_rgb = cv2.merge([r, g, b])
cv2.imwrite("./traffic_image/{}/{}_{}.png".format(dir_num,
dir_num, i), img_astro3_rgb)
###Output
_____no_output_____
###Markdown
Setting up X and y, and the train/test split
###Code
from PIL import Image
import os
import glob
import numpy as np
from sklearn.model_selection import train_test_split
caltech_dir = "./traffic_image/"
categories = labels
nb_classes = len(labels)
image_w = 64
image_h = 64
X = []
y = []
for idx, cat in enumerate(categories):
    # build the one-hot label vector
label = [0 for i in range(nb_classes)]
label[idx] = 1
image_dir = caltech_dir + "/" + str(cat)
files = glob.glob(image_dir+"/*.png")
    print(cat, " number of files : ", len(files))
    # resize each image to 64 x 64, convert it to an array for X, and store the one-hot encoded label
for i, f in enumerate(files):
img = Image.open(f)
img = img.convert("RGB")
img = img.resize((image_w, image_h))
data = np.asarray(img)
X.append(data)
y.append(label)
X = np.array(X)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y)
xy = (X_train, X_test, y_train, y_test)
X_train.shape
y_train_test = y_train.reshape(-1,1)
y_train_test.shape
y_train.shape
###Output
_____no_output_____
###Markdown
Save as an npy file - store the arrays for later reuse
###Code
import pickle
pickle.dump(xy, open("./model/multi_image_data.npy", 'wb'), protocol=4)
X_train, X_test, y_train, y_test = np.load('./model/multi_image_data.npy',allow_pickle=True)
X_train.shape
y_train.shape
###Output
_____no_output_____
###Markdown
Pixel value normalization - pixel values only range from 0 to 255, so we normalize simply by dividing by 255
###Code
# normalization (scale pixel values to [0, 1])
X_train = X_train.astype(float) / 255
X_test = X_test.astype(float) / 255
###Output
_____no_output_____
###Markdown
Modeling
###Code
import os
import glob
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import BatchNormalization
import matplotlib.pyplot as plt
import keras.backend.tensorflow_backend as K
nb_classes = len(labels)
with K.tf_ops.device('/device:GPU:0'):
model = Sequential()
model.add(Conv2D(32, (3, 3), padding="same",
input_shape=X_train.shape[1:], activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding="same", activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), padding="same", activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
    # define how to train: set the cost function and choose the optimisation method
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
model_dir = './model'
if not os.path.exists(model_dir):
os.mkdir(model_dir)
model_path = model_dir + '/multi_img_classification.model'
checkpoint = ModelCheckpoint(
filepath=model_path, monitor='val_loss', verbose=1, save_best_only=True)
early_stopping = EarlyStopping(monitor='val_loss', patience=6)
model.summary()
history = model.fit(X_train, y_train, batch_size=32, epochs=50, validation_split=0.2, callbacks=[checkpoint, early_stopping])
###Output
Train on 40800 samples, validate on 10200 samples
Epoch 1/50
40800/40800 [==============================] - 229s 6ms/step - loss: 2.7341 - accuracy: 0.2342 - val_loss: 1.3379 - val_accuracy: 0.6121
Epoch 00001: val_loss improved from inf to 1.33788, saving model to ./model/multi_img_classification.model
Epoch 2/50
40800/40800 [==============================] - 201s 5ms/step - loss: 1.3139 - accuracy: 0.5773 - val_loss: 0.6551 - val_accuracy: 0.8278
Epoch 00002: val_loss improved from 1.33788 to 0.65508, saving model to ./model/multi_img_classification.model
Epoch 3/50
40800/40800 [==============================] - 183s 4ms/step - loss: 0.8574 - accuracy: 0.7219 - val_loss: 0.4560 - val_accuracy: 0.8602
Epoch 00003: val_loss improved from 0.65508 to 0.45595, saving model to ./model/multi_img_classification.model
Epoch 4/50
40800/40800 [==============================] - 176s 4ms/step - loss: 0.6784 - accuracy: 0.7828 - val_loss: 0.3402 - val_accuracy: 0.9067
Epoch 00004: val_loss improved from 0.45595 to 0.34021, saving model to ./model/multi_img_classification.model
Epoch 5/50
40800/40800 [==============================] - 175s 4ms/step - loss: 0.5557 - accuracy: 0.8187 - val_loss: 0.2888 - val_accuracy: 0.9120
Epoch 00005: val_loss improved from 0.34021 to 0.28880, saving model to ./model/multi_img_classification.model
Epoch 6/50
40800/40800 [==============================] - 174s 4ms/step - loss: 0.4797 - accuracy: 0.8439 - val_loss: 0.2505 - val_accuracy: 0.9230
Epoch 00006: val_loss improved from 0.28880 to 0.25052, saving model to ./model/multi_img_classification.model
Epoch 7/50
40800/40800 [==============================] - 174s 4ms/step - loss: 0.4351 - accuracy: 0.8588 - val_loss: 0.2197 - val_accuracy: 0.9316
Epoch 00007: val_loss improved from 0.25052 to 0.21968, saving model to ./model/multi_img_classification.model
Epoch 8/50
40800/40800 [==============================] - 173s 4ms/step - loss: 0.3964 - accuracy: 0.8727 - val_loss: 0.2093 - val_accuracy: 0.9372
Epoch 00008: val_loss improved from 0.21968 to 0.20934, saving model to ./model/multi_img_classification.model
Epoch 9/50
40800/40800 [==============================] - 173s 4ms/step - loss: 0.3563 - accuracy: 0.8862 - val_loss: 0.2033 - val_accuracy: 0.9375
Epoch 00009: val_loss improved from 0.20934 to 0.20327, saving model to ./model/multi_img_classification.model
Epoch 10/50
40800/40800 [==============================] - 173s 4ms/step - loss: 0.3330 - accuracy: 0.8924 - val_loss: 0.1958 - val_accuracy: 0.9431
Epoch 00010: val_loss improved from 0.20327 to 0.19580, saving model to ./model/multi_img_classification.model
Epoch 11/50
40800/40800 [==============================] - 174s 4ms/step - loss: 0.3109 - accuracy: 0.8997 - val_loss: 0.1620 - val_accuracy: 0.9505
Epoch 00011: val_loss improved from 0.19580 to 0.16204, saving model to ./model/multi_img_classification.model
Epoch 12/50
40800/40800 [==============================] - 174s 4ms/step - loss: 0.2979 - accuracy: 0.9035 - val_loss: 0.1520 - val_accuracy: 0.9542
Epoch 00012: val_loss improved from 0.16204 to 0.15200, saving model to ./model/multi_img_classification.model
Epoch 13/50
40800/40800 [==============================] - 175s 4ms/step - loss: 0.2798 - accuracy: 0.9094 - val_loss: 0.1489 - val_accuracy: 0.9548
Epoch 00013: val_loss improved from 0.15200 to 0.14888, saving model to ./model/multi_img_classification.model
Epoch 14/50
40800/40800 [==============================] - 175s 4ms/step - loss: 0.2687 - accuracy: 0.9132 - val_loss: 0.1521 - val_accuracy: 0.9522
Epoch 00014: val_loss did not improve from 0.14888
Epoch 15/50
40800/40800 [==============================] - 176s 4ms/step - loss: 0.2523 - accuracy: 0.9200 - val_loss: 0.1471 - val_accuracy: 0.9543
Epoch 00015: val_loss improved from 0.14888 to 0.14709, saving model to ./model/multi_img_classification.model
Epoch 16/50
40800/40800 [==============================] - 180s 4ms/step - loss: 0.2478 - accuracy: 0.9202 - val_loss: 0.1481 - val_accuracy: 0.9539
Epoch 00016: val_loss did not improve from 0.14709
Epoch 17/50
40800/40800 [==============================] - 204s 5ms/step - loss: 0.2333 - accuracy: 0.9257 - val_loss: 0.1351 - val_accuracy: 0.9578
Epoch 00017: val_loss improved from 0.14709 to 0.13514, saving model to ./model/multi_img_classification.model
Epoch 18/50
40800/40800 [==============================] - 199s 5ms/step - loss: 0.2230 - accuracy: 0.9289 - val_loss: 0.1304 - val_accuracy: 0.9596
Epoch 00018: val_loss improved from 0.13514 to 0.13043, saving model to ./model/multi_img_classification.model
Epoch 19/50
40800/40800 [==============================] - 199s 5ms/step - loss: 0.2247 - accuracy: 0.9276 - val_loss: 0.1339 - val_accuracy: 0.9580
Epoch 00019: val_loss did not improve from 0.13043
Epoch 20/50
40800/40800 [==============================] - 202s 5ms/step - loss: 0.2174 - accuracy: 0.9308 - val_loss: 0.1184 - val_accuracy: 0.9608
Epoch 00020: val_loss improved from 0.13043 to 0.11840, saving model to ./model/multi_img_classification.model
Epoch 21/50
40800/40800 [==============================] - 202s 5ms/step - loss: 0.2077 - accuracy: 0.9335 - val_loss: 0.1295 - val_accuracy: 0.9584
Epoch 00021: val_loss did not improve from 0.11840
Epoch 22/50
40800/40800 [==============================] - 201s 5ms/step - loss: 0.1983 - accuracy: 0.9371 - val_loss: 0.1241 - val_accuracy: 0.9591
Epoch 00022: val_loss did not improve from 0.11840
Epoch 23/50
40800/40800 [==============================] - 204s 5ms/step - loss: 0.1959 - accuracy: 0.9374 - val_loss: 0.1174 - val_accuracy: 0.9623
Epoch 00023: val_loss improved from 0.11840 to 0.11738, saving model to ./model/multi_img_classification.model
Epoch 24/50
40800/40800 [==============================] - 198s 5ms/step - loss: 0.1983 - accuracy: 0.9382 - val_loss: 0.1301 - val_accuracy: 0.9608
Epoch 00024: val_loss did not improve from 0.11738
Epoch 25/50
40800/40800 [==============================] - 196s 5ms/step - loss: 0.1912 - accuracy: 0.9391 - val_loss: 0.1230 - val_accuracy: 0.9633
Epoch 00025: val_loss did not improve from 0.11738
Epoch 26/50
40800/40800 [==============================] - 185s 5ms/step - loss: 0.1855 - accuracy: 0.9406 - val_loss: 0.1116 - val_accuracy: 0.9643
Epoch 00026: val_loss improved from 0.11738 to 0.11155, saving model to ./model/multi_img_classification.model
Epoch 27/50
40800/40800 [==============================] - 192s 5ms/step - loss: 0.1874 - accuracy: 0.9424 - val_loss: 0.1100 - val_accuracy: 0.9639
Epoch 00027: val_loss improved from 0.11155 to 0.11002, saving model to ./model/multi_img_classification.model
Epoch 28/50
40800/40800 [==============================] - 186s 5ms/step - loss: 0.1761 - accuracy: 0.9443 - val_loss: 0.1148 - val_accuracy: 0.9653
Epoch 00028: val_loss did not improve from 0.11002
Epoch 29/50
40800/40800 [==============================] - 185s 5ms/step - loss: 0.1785 - accuracy: 0.9429 - val_loss: 0.1227 - val_accuracy: 0.9626
Epoch 00029: val_loss did not improve from 0.11002
Epoch 30/50
40800/40800 [==============================] - 184s 5ms/step - loss: 0.1676 - accuracy: 0.9467 - val_loss: 0.1162 - val_accuracy: 0.9644
Epoch 00030: val_loss did not improve from 0.11002
Epoch 31/50
40800/40800 [==============================] - 190s 5ms/step - loss: 0.1697 - accuracy: 0.9473 - val_loss: 0.1118 - val_accuracy: 0.9632
Epoch 00031: val_loss did not improve from 0.11002
Epoch 32/50
40800/40800 [==============================] - 188s 5ms/step - loss: 0.1673 - accuracy: 0.9470 - val_loss: 0.1127 - val_accuracy: 0.9650
Epoch 00032: val_loss did not improve from 0.11002
Epoch 33/50
40800/40800 [==============================] - 186s 5ms/step - loss: 0.1715 - accuracy: 0.9471 - val_loss: 0.1194 - val_accuracy: 0.9621
Epoch 00033: val_loss did not improve from 0.11002
###Markdown
Check the accuracy and loss curves
###Code
plot_target = ['loss', 'val_loss', 'accuracy', 'val_accuracy']
for each in plot_target:
plt.plot(history.history[each], label=each)
plt.legend()
plt.show()
model.evaluate(X_test, y_test)
###Output
17000/17000 [==============================] - 19s 1ms/step
###Markdown
Inspect the misclassified test images by eye
###Code
from keras.models import load_model
model = load_model('model/multi_img_classification.model')
y_test[0]
import numpy as np
predicted_result = model.predict(X_test)
predicted_labels = np.argmax(predicted_result, axis=1)
predicted_labels[:10]
###Output
_____no_output_____
###Markdown
- Extract the labels from the test data
###Code
y_labels = []
for vector in y_test:
for idx, i in enumerate(vector):
if i != 0:
y_labels.append(idx)
y_labels = np.array(y_labels)
y_labels
###Output
_____no_output_____
###Markdown
- Compare the predicted labels with the true labels and collect the misclassified results
###Code
wrong_result = []
for n in range(0, len(y_test)):
if predicted_labels[n] != y_labels[n]:
wrong_result.append(n)
len(wrong_result)
###Output
_____no_output_____
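###Markdown
The count of misclassified images above can be turned directly into an error rate; a small sketch using the arrays built in the previous cells:
###Code
# error rate on the test set, from the comparison above
print(len(wrong_result), "misclassified out of", len(y_labels),
      "->", round(100*len(wrong_result)/len(y_labels), 2), "% error rate")
###Output
_____no_output_____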
###Markdown
- Randomly sample some of the misclassified results
###Code
import random
samples = random.choices(population=wrong_result, k=4)
###Output
_____no_output_____
###Markdown
Inspect them directly
###Code
label_to_str = ["+์ํ๊ต์ฐจ๋ก","T์ํ๊ต์ฐจ๋ก","Y์ํ๊ต์ฐจ๋ก","ใ
์ํ๊ต์ฐจ๋ก","ใ
์ํ๊ต์ฐจ๋ก","์ฐ์ ๋๋ก","์ฐํฉ๋ฅ๋๋ก","์ขํฉ๋ฅ๋๋ก","ํ์ ํ๊ต์ฐจ๋ก","์ฒ ๊ธธ๊ฑด๋๋ชฉ","์ฐ๋ก๊ตฝ์๋๋ก","์ข๋ก๊ตฝ์๋๋ก","์ฐ์ข๋ก์ด์ค๊ตฝ์๋๋ก","์ข์ฐ๋ก์ด์ค๊ตฝ์๋๋ก","2๋ฐฉํฅํตํ","์ค๋ฅด๋ง๊ฒฝ์ฌ","๋ด๋ฆฌ๋ง๊ฒฝ์ฌ","๋๋กํญ์ด์ข์์ง","์ฐ์ธก์ฐจ๋ก์์ด์ง","์ข์ธก์ฐจ๋ก์์ด์ง","์ฐ์ธก๋ฐฉํตํ","์์ธก๋ฐฉํตํ","์ค์๋ถ๋ฆฌ๋์์","์ค์๋ถ๋ฆฌ๋๋๋จ","์ ํธ๊ธฐ","๋ฏธ๋๋ฌ์ด๋๋ก","๊ฐ๋ณ๋๋ก","๋
ธ๋ฉด๊ณ ๋ฅด์ง๋ชปํจ","๊ณผ์๋ฐฉ์งํฑ","๋์๋๋ก","ํก๋จ๋ณด๋","์ด๋ฆฐ์ด๋ณดํธ","์์ ๊ฑฐ","๋๋ก๊ณต์ฌ์ค","๋นํ๊ธฐ","ํกํ","ํฐ๋","๊ต๋","์ผ์๋๋ฌผ๋ณดํธ","์ํ","์์ต์ ์ฒด๊ตฌ๊ฐ","ํตํ๊ธ์ง","์๋์ฐจํตํ๊ธ์ง","ํ๋ฌผ์๋์ฐจํตํ๊ธ์ง","์นํฉ์๋์ฐจํตํ๊ธ์ง","์ด๋ฅ์๋์ฐจ๋ฐ์๋๊ธฐ์ฅ์น์์ ๊ฑฐํตํ๊ธ์ง","์๋์ฐจ, ์ด๋ฅ์๋์ฐจ๋น์๋๊ธฐ์ฅ์น์์ ๊ฑฐํตํ๊ธ์ง","๊ฒฝ์ด๊ธฐ, ํธ๋ ํฐ๋ฐ ์์๋ ํตํ๊ธ์ง","์์ ๊ฑฐํตํ๊ธ์ง","์ง์
๊ธ์ง","์ง์ง๊ธ์ง","์ฐํ์ ๊ธ์ง","์ขํ์ ๊ธ์ง","์ ํด๊ธ์ง","์์ง๋ฅด๊ธฐ๊ธ์ง","์ ์ฐจ,์ฃผ์ฐจ๊ธ์ง","์ฃผ์ฐจ๊ธ์ง","์ฐจ์ค๋์ ํ","์ฐจ๋์ด์ ํ","์ฐจํญ์ ํ","์ฐจ๊ฐ๊ฑฐ๋ฆฌํ๋ณด","์ต๊ณ ์๋์ ํ","์ต์ ์๋์ ํ","์ํ","์ผ์์ ์ง","์๋ณด","๋ณดํ์๋ณดํ๊ธ์ง","์ํ๋ฌผ์ ์ฌ์ฐจ๋ ํตํ๊ธ์ง"]
plt.figure(figsize=(14,12))
for idx, n in enumerate(samples):
plt.subplot(4, 2, idx+1)
plt.imshow(X_test[n].reshape(64,64,3), cmap='Greys', interpolation='nearest')
plt.title('Label : ' + label_to_str[y_labels[n]] + ', Predict : ' + label_to_str[predicted_labels[n]])
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Test with a real photographed image
###Code
from keras.models import load_model
model = load_model('model/multi_img_classification.model')
from PIL import Image
# resize the image to 64 x 64 and convert it to an array for X
image_w = 64
image_h = 64
X = []
img = Image.open('test2.jpeg')
img = img.convert("RGB")
img_resized = img.resize((image_w, image_h))
data = np.asarray(img_resized)
X.append(data)
X = np.array(X)
X = X.astype(float) / 255
img
result = model.predict(X)
label_to_str[np.argmax(result, axis=1)[0]]
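# Extra sketch: the top-3 classes and their softmax scores for this photo, to see how confident the model is
top3 = np.argsort(result[0])[::-1][:3]
for idx in top3:
    print(label_to_str[idx], ":", round(float(result[0][idx]), 3))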
###Output
_____no_output_____ |
data/attention-index-august.ipynb | ###Markdown
Exploring data for the attention indexThe idea of the attention index is to provide a score that indicates the impact of an article, and can easily be aggregated by subject, publisher or other axis.The index comprises two parts:- **promotion** how important the article was to the publisher, based on the extent to which they chose to editorially promote it- **response** how readers reacted to the article, based on social engagementsThe index will be a number between 0 and 100. 50% is driven by the promotion, and 50% by response:![Attention Index](../images/kaleida-attention-index-data-factors-chart.png) Promotion ScoreThe promotion score should take into account:- whether the publisher chose to make the article a lead article on their primary front (30%)- how long the publisher chose to retain the article on their front (40%)- whether they chose to push the article on their facebook brand page (30%)It should be scaled based on the value of that promotion, so a popular, well-visited site should score higher than one on the fringes. And similarly a powerful, well-followed brand page should score higher than one less followed. Response ScoreThe response score takes into account the number of engagements on Facebook. The rest of this notebook explores how those numbers could work, starting with the response score because that is easier, I think. Setup
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
data = pd.read_csv("articles_2017-08-01_2017-08-31.csv", index_col="id", \
parse_dates=["published", "discovered"])
data.head()
###Output
_____no_output_____
###Markdown
Response Score The response score is a number between 0 and 50 that indicates the level of response to an article.Perhaps in the future we may choose to include other factors, but for now we just include engagements on Facebook. The maximum score of 50 should be achieved by an article that does really well compared with others.
###Code
pd.options.display.float_format = '{:.2f}'.format
data.fb_engagements.describe([0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
###Output
_____no_output_____
###Markdown
There are a few articles there with 1 million plus engagements, let's just double check that.
###Code
data[data.fb_engagements > 1000000]
data.fb_engagements.mode()
###Output
_____no_output_____
###Markdown
Going back to the engagement counts, we see the mean is 1,542, mode is zero, median is 29, 90th percentile is 2,085, 99th percentile is 27,998, 99.5th percentile is 46,698. The standard deviation is 12,427, significantly higher than the mean, so this is not a normal distribution. We want to provide a sensible way of allocating this to the 50 buckets we have available. Let's just bucket geometrically first:
###Code
mean = data.fb_engagements.mean()
median = data.fb_engagements.median()
plt.figure(figsize=(12,4.5))
plt.hist(data.fb_engagements, bins=50)
plt.axvline(mean, linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(median, label=f'Median ({median:,.0f})', color='red')
leg = plt.legend()
###Output
_____no_output_____
###Markdown
Well that's not very useful. Almost everything will score close to 0 if we just do that, which isn't a useful metric.Let's start by excluding zeros.
###Code
non_zero_fb_enagagements = data.fb_engagements[data.fb_engagements > 0]
plt.figure(figsize=(12,4.5))
plt.hist(non_zero_fb_enagagements, bins=50)
plt.axvline(mean, linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(median, label=f'Median ({median:,.0f})', color='red')
leg = plt.legend()
###Output
_____no_output_____
###Markdown
That's still a big number at the bottom, and so not a useful score.Next, we exclude the outliers: cap at the 99.9th percentile (i.e. 119211), so that 0.1% of articles should receive the maximum score.
###Code
non_zero_fb_enagagements_without_outliers = non_zero_fb_enagagements.clip_upper(119211)
plt.figure(figsize=(12,4.5))
plt.hist(non_zero_fb_enagagements_without_outliers, bins=50)
plt.axvline(mean, linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(median, label=f'Median ({median:,.0f})', color='red')
leg = plt.legend()
###Output
_____no_output_____
###Markdown
That's a bit better, but still way too clustered at the low end. Let's look at a log normal distribution.
###Code
mean = data.fb_engagements.mean()
median = data.fb_engagements.median()
ninety = data.fb_engagements.quantile(.90)
ninetyfive = data.fb_engagements.quantile(.95)
ninetynine = data.fb_engagements.quantile(.99)
plt.figure(figsize=(12,4.5))
plt.hist(np.log(non_zero_fb_enagagements + median), bins=50)
plt.axvline(np.log(mean), linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(np.log(median), label=f'Median ({median:,.0f})', color='green')
plt.axvline(np.log(ninety), linestyle='--', label=f'90% percentile ({ninety:,.0f})', color='red')
plt.axvline(np.log(ninetyfive), linestyle='-.', label=f'95% percentile ({ninetyfive:,.0f})', color='red')
plt.axvline(np.log(ninetynine), linestyle=':', label=f'99% percentile ({ninetynine:,.0f})', color='red')
leg = plt.legend()
###Output
_____no_output_____
###Markdown
That's looking a bit more interesting. After some exploration, to avoid too much emphasis on the lower end of the scale, we move the numbers to the right a bit by adding on the median.
###Code
log_engagements = (non_zero_fb_enagagements
.clip_upper(data.fb_engagements.quantile(.999))
.apply(lambda x: np.log(x + median))
)
log_engagements.describe()
###Output
_____no_output_____
###Markdown
Use standard feature scaling to bring that to a 1 to 50 range
###Code
def scale_log_engagements(engagements_logged):
return np.ceil(
50 * (engagements_logged - log_engagements.min()) / (log_engagements.max() - log_engagements.min())
)
def scale_engagements(engagements):
return scale_log_engagements(np.log(engagements + median))
scaled_non_zero_engagements = scale_log_engagements(log_engagements)
scaled_non_zero_engagements.describe()
# add in the zeros, as zero
scaled_engagements = pd.concat([scaled_non_zero_engagements, data.fb_engagements[data.fb_engagements == 0]])
proposed = pd.DataFrame({"fb_engagements": data.fb_engagements, "response_score": scaled_engagements})
proposed.response_score.plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
Now look at how share counts map to scores:
###Code
plt.figure(figsize=(15,8))
shares = np.arange(1, 60000)
plt.plot(shares, scale_engagements(shares))
plt.xlabel("shares")
plt.ylabel("score")
plt.axhline(scale_engagements(mean), linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axhline(scale_engagements(median), label=f'Median ({median:,.0f})', color='green')
plt.axhline(scale_engagements(ninety), linestyle='--', label=f'90% percentile ({ninety:,.0f})', color='red')
plt.axhline(scale_engagements(ninetyfive), linestyle='-.', label=f'95% percentile ({ninetyfive:,.0f})', color='red')
plt.axhline(scale_engagements(ninetynine), linestyle=':', label=f'99% percentile ({ninetynine:,.0f})', color='red')
plt.legend(frameon=True, shadow=True)
proposed.groupby("response_score").fb_engagements.agg([np.size, np.min, np.max])
###Output
_____no_output_____
###Markdown
Looks good to me, let's save that.
###Code
data["response_score"] = proposed.response_score
###Output
_____no_output_____
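###Markdown
For later reuse, a hypothetical helper (not part of the original pipeline) that wraps the clipping and scaling above into a single function; it relies on `scale_engagements` and the monthly statistics computed earlier.
###Code
# hypothetical convenience wrapper around scale_engagements() defined above
_cap = data.fb_engagements.quantile(.999)
def response_score(engagements, cap=_cap):
    if engagements == 0:
        return 0.0
    return scale_engagements(min(engagements, cap))
# spot-check: zero, the median, the 90th/99th percentiles, and a value above the cap
[response_score(e) for e in (0, 29, 2085, 27998, 500000)]
###Output
_____no_output_____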
###Markdown
ProposalThe maximum of 50 points is awarded when the engagements are greater than the 99.9th percentile, rolling over the last month. i.e. where $limit$ is the 99.9th percentile of engagements calculated over the previous month, the response score for article $a$ is:\begin{align}basicScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ \log(\min(engagements_a,limit) + median(engagements)) & \text{if } engagements_a > 0\end{cases} \\responseScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ 50 \cdot \frac{basicScore_a - \min(basicScore)}{\max(basicScore) - \min(basicScore)} & \text{if } engagements_a > 0\end{cases} \\\\\text{The latter equation can be expanded to:} \\responseScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ 50 \cdot \frac{\log(\min(engagements_a,limit) + median(engagements)) - \log(1 + median(engagements))} {\log(limit + median(engagements)) - \log(1 + median(engagements))} & \text{if } engagements_a > 0\end{cases} \\\end{align} Promotion ScoreThe aim of the promotion score is to indicate how important the article was to the publisher, by tracking where they chose to promote it. This is a number between 0 and 50 comprised of:- 20 points based on whether the article was promoted as the "lead" story on the publisher's home page- 15 points based on how long the article was promoted anywhere on the publisher's home page- 15 points based on whether the article was promoted on the publisher's main facebook brand pageThe first two should be scaled by the popularity/reach of the home page, for which we use the alexa page rank as a proxy.The last should be scaled by the popularity/reach of the brand page, for which we use the number of likes the brand page has. Lead story (20 points)
###Code
data.mins_as_lead.describe([0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
###Output
_____no_output_____
###Markdown
As expected, the vast majority of articles don't make it as lead. Let's explore how long publishers typically keep something as lead.
###Code
lead_articles = data[data.mins_as_lead > 0]
lead_articles.mins_as_lead.describe([0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
lead_articles.mins_as_lead.plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
For lead, it's a significant thing for an article to be lead at all, so although we want to penalise articles that were lead for a very short time, mostly we want to score the maximum even if it wasn't lead for ages. So we'll give maximum points when something has been lead for an hour.
###Code
lead_articles.mins_as_lead.clip_upper(60).plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
We also want to scale this by the alexa page rank, such that the maximum score of 20 points is for an article that was lead for a full hour on the most popular site. So let's explore the alexa numbers.
###Code
alexa_ranks = data.groupby(by="publisher_id").alexa_rank.mean().sort_values()
alexa_ranks
alexa_ranks.plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
Let's try the simple option first: just divide the number of minutes as lead by the alexa rank. What's the scale of numbers we get then.
###Code
lead_proposal_1 = lead_articles.mins_as_lead.clip_upper(60) / lead_articles.alexa_rank
lead_proposal_1.plot.hist()
###Output
_____no_output_____
###Markdown
Looks like there's too much of a cluster around 0. Have we massively over penalised the publishers with a high alexa rank?
###Code
lead_proposal_1.groupby(data.publisher_id).mean().plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
Yes. Let's try taking the log of the alexa rank and see if that looks better.
###Code
lead_proposal_2 = (lead_articles.mins_as_lead.clip_upper(60) / np.log(lead_articles.alexa_rank))
lead_proposal_2.plot.hist()
lead_proposal_2.groupby(data.publisher_id).describe()
lead_proposal_2.groupby(data.publisher_id).min().plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
That looks about right, though the smaller publishers should be closer to zero. So let's apply feature scaling to this, to give a number between 1 and 20. (Anything not as lead will pass through as zero.)
###Code
def rescale(series):
return (series - series.min()) / (series.max() - series.min())
lead_proposal_3 = np.ceil(20 * rescale(lead_proposal_2))
lead_proposal_2.min(), lead_proposal_2.max()
lead_proposal_3.plot.hist()
lead_proposal_3.groupby(data.publisher_id).median().plot.bar(figsize=[10,5])
data["lead_score"] = pd.concat([lead_proposal_3, data.mins_as_lead[data.mins_as_lead==0]])
data.lead_score.value_counts().sort_index()
data.lead_score.groupby(data.publisher_id).max()
###Output
_____no_output_____
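###Markdown
A hypothetical convenience wrapper for the lead-score steps above (the same 60-minute clip, log of the alexa rank, and feature scaling), kept here for reference:
###Code
# hypothetical helper; reuses the scaling constants from lead_proposal_2 computed above
_lp_min, _lp_max = lead_proposal_2.min(), lead_proposal_2.max()
def lead_score(mins_as_lead, alexa_rank):
    if mins_as_lead == 0:
        return 0.0
    return np.ceil(20 * (min(mins_as_lead, 60) / np.log(alexa_rank) - _lp_min) / (_lp_max - _lp_min))
# example: half an hour vs a full hour as lead on the most popular (lowest alexa rank) front
[lead_score(m, lead_articles.alexa_rank.min()) for m in (30, 60)]
###Output
_____no_output_____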
###Markdown
In summary then, the score for article $a$ is:$$unscaledLeadScore_a = \frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)}\\leadScore_a = 19 \cdot \frac{unscaledLeadScore_a - \min(unscaledLeadScore)}{\max(unscaledLeadScore) - \min(unscaledLeadScore)} + 1$$Since the minimum value of $minsAsLead$ is 1, $\min(unscaledLeadScore)$ is pretty insignificant. So we can simplify this to:$$leadScore_a = 20 \cdot \frac{unscaledLeadScore_a } {\max(unscaledLeadScore)} $$or (noting that the maximum is attained by the most popular site, i.e. the smallest alexa rank): $$leadScore_a = 20 \cdot \frac{\frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)} } {\frac{60}{\log(\min(alexaRank))}} $$$$leadScore_a = \left( 20 \cdot \frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)} \cdot {\frac{\log(\min(alexaRank))}{60}} \right)$$ Time on front score (15 points)This is similar to time as lead, so let's try doing the same calculation, except we also want to factor in the number of slots on the front:$$frontScore_a = 15 \left(\frac{\min(minsOnFront_a, 1440)}{alexaRank_a \cdot numArticlesOnFront_a}\right) \left( \frac{\min(alexaRank \cdot numArticlesOnFront)}{1440} \right)$$
###Code
(data.alexa_rank * data.num_articles_on_front).min() / 1440
time_on_front_proposal_1 = np.ceil(data.mins_on_front.clip_upper(1440) / (data.alexa_rank * data.num_articles_on_front) * (2.45) * 15)
time_on_front_proposal_1.plot.hist(figsize=(15, 7), bins=15)
time_on_front_proposal_1.value_counts().sort_index()
time_on_front_proposal_1.groupby(data.publisher_id).sum()
###Output
_____no_output_____
###Markdown
That looks good to me.
###Code
data["front_score"] = np.ceil(data.mins_on_front.clip_upper(1440) / (data.alexa_rank * data.num_articles_on_front) * (2.45) * 15).fillna(0)
data.front_score
###Output
_____no_output_____
###Markdown
Facebook brand page promotion (15 points)One way a publisher has of promoting content is to post to their brand page. The significance of doing so is stronger when the brand page has more followers (likes).$$ facebookPromotionProposed1_a = 15 \left( \frac {brandPageLikes_a} {\max(brandPageLikes)} \right) $$Now let's explore the data to see if that makes sense. **tl;dr the formula above is incorrect**
###Code
data.fb_brand_page_likes.max()
facebook_promotion_proposed_1 = np.ceil((15 * (data.fb_brand_page_likes / data.fb_brand_page_likes.max())).fillna(0))
facebook_promotion_proposed_1.value_counts().sort_index().plot.bar()
facebook_promotion_proposed_1.groupby(data.publisher_id).describe()
###Output
_____no_output_____
###Markdown
That's too much variation: sites like the Guardian, which have a respectable 7.5m likes, should not be scoring a 3. Let's try applying a log to it, and then standard feature scaling again.
###Code
data.fb_brand_page_likes.groupby(data.publisher_id).max()
np.log(2149)
np.log(data.fb_brand_page_likes.groupby(data.publisher_id).max())
###Output
_____no_output_____
###Markdown
That's more like it, but the lower numbers should be smaller.
###Code
np.log(data.fb_brand_page_likes.groupby(data.publisher_id).max() / 1000)
scaled_fb_brand_page_likes = (data.fb_brand_page_likes / 1000)
facebook_promotion_proposed_2 = np.ceil(\
(15 * \
(np.log(scaled_fb_brand_page_likes) / np.log(scaled_fb_brand_page_likes.max()))\
)\
).fillna(0)
facebook_promotion_proposed_2.groupby(data.publisher_id).max()
###Output
_____no_output_____
###Markdown
LGTM. So the equation is$$ facebookPromotion_a = 15 \left( \frac {\log(\frac {brandPageLikes_a}{1000})} {\log(\frac {\max(brandPageLikes)}{1000})} \right) $$ Now, let's try applying a standard feature scaling approach to this, rather than using a magic number of 1,000. That equation would be:\begin{align}unscaledFacebookPromotion_a &= \log(brandPageLikes_a) \\facebookPromotion_a &= 15 \cdot \frac{unscaledFacebookPromotion_a - \min(unscaledFacebookPromotion)}{\max(unscaledFacebookPromotion) - \min(unscaledFacebookPromotion)} \\\\\text{The scaling can be simplified to:} \\facebookPromotion_a &= 15 \cdot \frac{unscaledFacebookPromotion_a - \log(\min(brandPageLikes))}{\log(\max(brandPageLikes)) - \log(\min(brandPageLikes))} \\\\\text{Meaning the overall equation becomes:} \\facebookPromotion_a &= 15 \cdot \frac{\log(brandPageLikes_a) - \log(\min(brandPageLikes))}{\log(\max(brandPageLikes)) - \log(\min(brandPageLikes))} \end{align}
###Code
facebook_promotion_proposed_3 = np.ceil(
(14 *
(
(np.log(data.fb_brand_page_likes) - np.log(data.fb_brand_page_likes.min()) ) /
(np.log(data.fb_brand_page_likes.max()) - np.log(data.fb_brand_page_likes.min()))
)
) + 1
)
facebook_promotion_proposed_3.groupby(data.publisher_id).max()
data["facebook_promotion_score"] = facebook_promotion_proposed_3.fillna(0.0)
###Output
_____no_output_____
###Markdown
Review
###Code
data["promotion_score"] = (data.lead_score + data.front_score + data.facebook_promotion_score)
data["attention_index"] = (data.promotion_score + data.response_score)
data.promotion_score.plot.hist(bins=np.arange(50), figsize=(15,6))
data.attention_index.plot.hist(bins=np.arange(100), figsize=(15,6))
data.attention_index.value_counts().sort_index()
# and let's see the articles with the biggest attention index
data.sort_values("attention_index", ascending=False)
data["score_diff"] = data.promotion_score - data.response_score
# promoted but low response
data.sort_values("score_diff", ascending=False).head(25)
# high response but not promoted
data.sort_values("score_diff", ascending=True).head(25)
###Output
_____no_output_____
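###Markdown
Because the index is just the sum of the two parts, it aggregates cleanly; as a quick illustration (no conclusions drawn here), the mean scores per publisher:
###Code
# mean promotion, response and overall attention index per publisher (illustration only)
data.groupby("publisher_id")[["promotion_score", "response_score", "attention_index"]].mean().sort_values("attention_index", ascending=False)
###Output
_____no_output_____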
###Markdown
Write that data to a file. Note that the scores here are provisional for two reasons:1. they should be using a rolling-month based on the article publication date to calculate medians/min/max etc, whereas in this workbook we are just using values for the month of August 2. for analysis, we've rounded the numbers; we don't expect to do that for the actual scores
###Code
data.to_csv("articles_with_provisional_scores_2017-08-01_2017-08-31.csv")
###Output
_____no_output_____ |
05_hot_bats/bin/qpcr_pro_tbdt_counts.ipynb | ###Markdown
Read data
###Code
mg_abund = pd.read_csv('../data/timeseries2plot.tsv',
delimiter='\t',
parse_dates=['date'])
mg_abund.head()
qpcr_abund = pd.read_csv('../data/hot_bats_prochlorococcus_qpcr_data.tsv',
delimiter='\t',
parse_dates=['date'])
qpcr_abund.head()
###Output
_____no_output_____
###Markdown
Wrangling Set a min and max date for the time window. Time stamps in each individual dataset will be relative to those absolute times
###Code
dmin = pd.to_datetime('2002-10-01 00:00:00+00:00')
dmax = pd.to_datetime('2005-04-01 00:00:00+00:00')
###Output
_____no_output_____
###Markdown
First wrangle the qPCR data
###Code
qpcr_abund['dmin'] = dmin
qpcr_abund['dmax'] = dmax
qpcr_abund_red = qpcr_abund.dropna(subset=['abundance'])
qpcr_abund_red['tdiff'] = (qpcr_abund_red['date'] - qpcr_abund_red['dmin']).dt.days
qpcr_abund_red['abundance_trans'] = (qpcr_abund_red['abundance']+1)**(1/3)
###Output
<ipython-input-17-af1f83480e54>:4: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
qpcr_abund_red['tdiff'] = (qpcr_abund_red['date'] - qpcr_abund_red['dmin']).dt.days
<ipython-input-17-af1f83480e54>:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
qpcr_abund_red['abundance_trans'] = (qpcr_abund_red['abundance']+1)**(1/3)
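###Markdown
The SettingWithCopyWarning above comes from adding columns to a filtered view of the dataframe; a minimal sketch of the usual fix is to take an explicit `.copy()` of the filtered frame first (the resulting values are unchanged):
###Code
# same wrangling as above, but on an explicit copy so pandas does not warn
qpcr_abund_red = qpcr_abund.dropna(subset=['abundance']).copy()
qpcr_abund_red['tdiff'] = (qpcr_abund_red['date'] - qpcr_abund_red['dmin']).dt.days
qpcr_abund_red['abundance_trans'] = (qpcr_abund_red['abundance'] + 1)**(1/3)
###Output
_____no_output_____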
###Markdown
Now wrangle the metagenome data
###Code
mypad = 0.000
mg_abund['dmin'] = dmin
mg_abund['dmax'] = dmax
mg_abund['tdiff'] = (mg_abund['date'] - mg_abund['dmin']).dt.days
mg_abund.head()
###Output
_____no_output_____
###Markdown
Make dictionaries of data to plot
###Code
stcld = ['HOT_HLII', 'HOT_HLI', 'HOT_LLI', 'HOT_LLII', 'HOT_LLIV',
'BATS_HLII', 'BATS_HLI', 'BATS_LLI', 'BATS_LLII', 'BATS_LLIV']
qpcr_dict = {}
for station in ['HOT', "BATS"]:
for clade in ['HLII', 'HLI', 'LLI', 'LLII', 'LLIV']:
qpcr_dict[station+'_'+clade] = qpcr_abund_red.query("clade == @clade & location == @station & date >= dmin & date<= dmax")
mg_dict = {}
for station in ['HOT', "BATS"]:
for clade in ['HLII', 'HLI', 'LLI', 'LLII', 'LLIV']:
mg_dict[station+'_'+clade] = mg_abund.query("location == @station & date >= dmin & date<= dmax")
###Output
_____no_output_____
###Markdown
Preparing to plot Make some manual axis ticks that actually correspond to 4-month cutoffs
###Code
pd.to_datetime('2005-01-01')-pd.to_datetime('2002-10-01')
dlabs = ['2003-01-01', '2003-04-01', '2003-07-01', '2003-10-01',
'2004-01-01', '2004-04-01', '2004-07-01', '2004-10-01',
'2005-01-01']
dvals = [92, 182, 273, 365,
457, 548, 639, 731,
823]
###Output
_____no_output_____
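###Markdown
The same tick positions can also be derived from the label dates instead of being typed by hand; a small sketch (the `tz_localize('UTC')` is needed because `dmin` is timezone-aware):
###Code
# recompute the tick positions as days since dmin for each label date
dvals_check = [(pd.to_datetime(d).tz_localize('UTC') - dmin).days for d in dlabs]
print(dvals_check)
###Output
_____no_output_____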
###Markdown
Do the plot
###Code
scale = 450
fig, axs = plt.subplots(2,5, figsize=(25, 6), facecolor='w', edgecolor='k', sharey=True)
fig.subplots_adjust(hspace = .25, wspace=0.03)
axs = axs.ravel()
axs_bin = []
for i in range(10):
x0=qpcr_dict[stcld[i]]['tdiff']
y0=qpcr_dict[stcld[i]]['depth']
z0=qpcr_dict[stcld[i]]['abundance_trans']
x1=mg_dict[stcld[i]]['tdiff']
y1=mg_dict[stcld[i]]['depth']
z1=mg_dict[stcld[i]]['RA']
axs[i].tricontour(x0, y0, z0, levels=4, linewidths=0.5, colors='k')
cntr = axs[i].tricontourf(x0, y0, z0, levels=15, cmap="viridis") #cmo.algae
#cntr = axs[i].tripcolor(x0, y0, z0, cmap="cmo.algae") #viridis
#axs[i].plot(x0, y0, 'ko', ms=0.5)
axs_bin.append(axs[i].scatter(x1, y1, edgecolor = '#000000', c='white', s=scale*z1, alpha=1, marker='o'))
axs[i].set_title(stcld[i])
axs[i].axis([min(x0), max(x0), max(y0), 0])
axs[i].set_xticks(dvals)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.81, 0.15, 0.01, 0.7])
mycbar = fig.colorbar(cntr, cax=cbar_ax)
mycbar.set_label("cubic root abun", rotation=270)
kw = dict(prop="sizes", num=6, func=lambda s: (s/scale))
fig.legend(*axs_bin[0].legend_elements(**kw), loc=[0.9, 0.3], title="TBDT\nrel abund")
plt.savefig("../figs/all_ecotypes.png", dpi=300, format="png")
plt.savefig("../figs/all_ecotypes.svg", format="svg")
###Output
_____no_output_____
###Markdown
Plot with only HLI, HLII and LLI
###Code
stcld_r = stcld[0:3]+stcld[5:8]
stcld_r
scale = 450
fig, axs = plt.subplots(2,3, figsize=(15, 6), facecolor='w', edgecolor='k', sharey=True)
fig.subplots_adjust(hspace = .25, wspace=0.03)
axs = axs.ravel()
axs_bin = []
for i in range(6):
x0=qpcr_dict[stcld_r[i]]['tdiff']
y0=qpcr_dict[stcld_r[i]]['depth']
z0=qpcr_dict[stcld_r[i]]['abundance_trans']
x1=mg_dict[stcld_r[i]]['tdiff']
y1=mg_dict[stcld_r[i]]['depth']
z1=mg_dict[stcld_r[i]]['RA']
axs[i].tricontour(x0, y0, z0, levels=4, linewidths=0.5, colors='k')
cntr = axs[i].tricontourf(x0, y0, z0, levels=15, cmap="viridis") #cmo.algae
#cntr = axs[i].tripcolor(x0, y0, z0, cmap="cmo.algae") #viridis
#axs[i].plot(x0, y0, 'ko', ms=0.5)
axs_bin.append(axs[i].scatter(x1, y1, edgecolor = '#000000', c='white', s=scale*z1, alpha=1, marker='o'))
    axs[i].set_title(stcld_r[i])
axs[i].axis([min(x0), max(x0), max(y0), 0])
axs[i].set_xticks(dvals)
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.81, 0.15, 0.01, 0.7])
mycbar = fig.colorbar(cntr, cax=cbar_ax)
mycbar.set_label("cubic root abun", rotation=270)
kw = dict(prop="sizes", num=6, func=lambda s: (s/scale))
fig.legend(*axs_bin[0].legend_elements(**kw), loc=[0.9, 0.3], title="TBDT\nrel abund")
plt.savefig("../figs/main_ecotypes.png", dpi=300, format="png")
plt.savefig("../figs/main_ecotypes.svg", format="svg")
###Output
_____no_output_____ |
topic_model_vis.ipynb | ###Markdown
**Topic Modeling for Biomedical Literature**. Author: Anoushkrit Goel. --- Topic Model: In machine learning and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body.[6] Libraries: Gensim, NLTK, NumPy, pandas, pyLDAvis, stop_words. Importing Libraries
###Code
import nltk
nltk.download('stopwords')
import re
import numpy as np
import pandas as pd
from pprint import pprint
# Gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
# spacy for lemmatization
import spacy
# Plotting tools
import pyLDAvis
import pyLDAvis.gensim # don't skip this
import matplotlib.pyplot as plt
# Enable logging for gensim - optional
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.ERROR)
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# NLTK Stop words
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
stop_words.extend(['from', 'subject', 're', 'edu', 'use'])
!pip install pyldavis
# Import Dataset
df = pd.read_json('https://raw.githubusercontent.com/selva86/datasets/master/newsgroups.json')
print(df.target_names.unique())
df.head()
# Convert to list
data = df.content.values.tolist()
# Remove Emails
data = [re.sub('\S*@\S*\s?', '', sent) for sent in data]
# Remove new line characters
data = [re.sub('\s+', ' ', sent) for sent in data]
# Remove distracting single quotes
data = [re.sub("\'", "", sent) for sent in data]
pprint(data[:1])
def sent_to_words(sentences):
for sentence in sentences:
yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) # deacc=True removes punctuations
data_words = list(sent_to_words(data))
print(data_words[:1])
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
# See trigram example
print(trigram_mod[bigram_mod[data_words[0]]])
# Define functions for stopwords, bigrams, trigrams and lemmatization
def remove_stopwords(texts):
return [[word for word in simple_preprocess(str(doc)) if word not in stop_words] for doc in texts]
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
def make_trigrams(texts):
return [trigram_mod[bigram_mod[doc]] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""https://spacy.io/api/annotation"""
texts_out = []
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
return texts_out
# Remove Stop Words
data_words_nostops = remove_stopwords(data_words)
# Form Bigrams
data_words_bigrams = make_bigrams(data_words_nostops)
# Initialize spacy 'en' model, keeping only tagger component (for efficiency)
# python3 -m spacy download en
nlp = spacy.load('en', disable=['parser', 'ner'])
# Do lemmatization keeping only noun, adj, vb, adv
data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
print(data_lemmatized[:1])
# Create Dictionary
id2word = corpora.Dictionary(data_lemmatized)
# Create Corpus
texts = data_lemmatized
# Term Document Frequency
corpus = [id2word.doc2bow(text) for text in texts]
# View
print(corpus[:1])
id2word[0]
# Human readable format of corpus (term-frequency)
[[(id2word[id], freq) for id, freq in cp] for cp in corpus[:1]]
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=20,
random_state=100,
update_every=1,
chunksize=100,
passes=10,
alpha='auto',
per_word_topics=True)
# Print the keywords for each of the 20 topics
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
# Compute Perplexity
print('\nPerplexity: ', lda_model.log_perplexity(corpus)) # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=data_lemmatized, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
# Visualize the topics
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda_model, corpus, id2word)
vis
# Download File: http://mallet.cs.umass.edu/dist/mallet-2.0.8.zip
mallet_path = 'path/to/mallet-2.0.8/bin/mallet'
# update this path
ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=20, id2word=id2word)
# Show Topics
pprint(ldamallet.show_topics(formatted=False))
# Compute Coherence Score
coherence_model_ldamallet = CoherenceModel(model=ldamallet, texts=data_lemmatized, dictionary=id2word, coherence='c_v')
coherence_ldamallet = coherence_model_ldamallet.get_coherence()
print('\nCoherence Score: ', coherence_ldamallet)
def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3):
"""
Compute c_v coherence for various number of topics
Parameters:
----------
dictionary : Gensim dictionary
corpus : Gensim corpus
texts : List of input texts
limit : Max num of topics
Returns:
-------
model_list : List of LDA topic models
coherence_values : Coherence values corresponding to the LDA model with respective number of topics
"""
coherence_values = []
model_list = []
for num_topics in range(start, limit, step):
model = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=num_topics, id2word=id2word)
model_list.append(model)
coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
coherence_values.append(coherencemodel.get_coherence())
return model_list, coherence_values
# Can take a long time to run.
model_list, coherence_values = compute_coherence_values(dictionary=id2word, corpus=corpus, texts=data_lemmatized, start=2, limit=40, step=6)
# Show graph
limit=40; start=2; step=6;
x = range(start, limit, step)
plt.plot(x, coherence_values)
plt.xlabel("Num Topics")
plt.ylabel("Coherence score")
plt.legend(("coherence_values"), loc='best')
plt.show()
# Print the coherence scores
for m, cv in zip(x, coherence_values):
print("Num Topics =", m, " has Coherence Value of", round(cv, 4))
# Select the model and print the topics
optimal_model = model_list[3]
model_topics = optimal_model.show_topics(formatted=False)
pprint(optimal_model.print_topics(num_words=10))
def format_topics_sentences(ldamodel=lda_model, corpus=corpus, texts=data):
# Init output
sent_topics_df = pd.DataFrame()
# Get main topic in each document
for i, row in enumerate(ldamodel[corpus]):
row = sorted(row, key=lambda x: (x[1]), reverse=True)
# Get the Dominant topic, Perc Contribution and Keywords for each document
for j, (topic_num, prop_topic) in enumerate(row):
if j == 0: # => dominant topic
wp = ldamodel.show_topic(topic_num)
topic_keywords = ", ".join([word for word, prop in wp])
sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True)
else:
break
sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords']
# Add original text to the end of the output
contents = pd.Series(texts)
sent_topics_df = pd.concat([sent_topics_df, contents], axis=1)
return(sent_topics_df)
df_topic_sents_keywords = format_topics_sentences(ldamodel=optimal_model, corpus=corpus, texts=data)
# Format
df_dominant_topic = df_topic_sents_keywords.reset_index()
df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text']
# Show
df_dominant_topic.head(10)
# Group top 5 sentences under each topic
sent_topics_sorteddf_mallet = pd.DataFrame()
sent_topics_outdf_grpd = df_topic_sents_keywords.groupby('Dominant_Topic')
for i, grp in sent_topics_outdf_grpd:
sent_topics_sorteddf_mallet = pd.concat([sent_topics_sorteddf_mallet,
grp.sort_values(['Perc_Contribution'], ascending=[0]).head(1)],
axis=0)
# Reset Index
sent_topics_sorteddf_mallet.reset_index(drop=True, inplace=True)
# Format
sent_topics_sorteddf_mallet.columns = ['Topic_Num', "Topic_Perc_Contrib", "Keywords", "Text"]
# Show
sent_topics_sorteddf_mallet.head()
# Number of Documents for Each Topic
topic_counts = df_topic_sents_keywords['Dominant_Topic'].value_counts()
# Percentage of Documents for Each Topic
topic_contribution = round(topic_counts/topic_counts.sum(), 4)
# Topic Number and Keywords
topic_num_keywords = df_topic_sents_keywords[['Dominant_Topic', 'Topic_Keywords']]
# Concatenate Column wise
df_dominant_topics = pd.concat([topic_num_keywords, topic_counts, topic_contribution], axis=1)
# Change Column names
df_dominant_topics.columns = ['Dominant_Topic', 'Topic_Keywords', 'Num_Documents', 'Perc_Documents']
# Show
df_dominant_topics
###Output
_____no_output_____ |
_tutorials/tutorial_03/code/mle_hands_on.ipynb | ###Markdown
MLE - Taxi Ride Durations Initialization
###Code
# Importing packages
import numpy as np # Numerical package (mainly multi-dimensional arrays and linear algebra)
import pandas as pd # A package for working with data frames
import matplotlib.pyplot as plt # A plotting package
## Setup matplotlib to output figures into the notebook
## - To make the figures interactive (zoomable, tooltip, etc.) use ""%matplotlib notebook" instead
%matplotlib inline
## Setting some nice matplotlib defaults
plt.rcParams['figure.figsize'] = (5.0, 5.0) # Set default plot's sizes
plt.rcParams['figure.dpi'] = 120 # Set default plot's dpi (increase fonts' size)
plt.rcParams['axes.grid'] = True # Show grid by default in figures
## Auxiliary function for prining equations and pandas tables in cells output
from IPython.core.display import display, HTML, Latex
## Setting style (not relevant in Colab)
display(HTML('<link rel="stylesheet" href="../../../css/style.css">')) ## Use the same style as the rest of the site (mostly for titles)
display(HTML("<style>.output_png { display: table-cell; text-align: center; vertical-align: middle; }</style>")) ## Center output figures
###Output
_____no_output_____
###Markdown
Preparing the Dataset. We prepare the NYC taxi rides dataset. Loading the data: the data can be found at [https://technion046195.github.io/semester_2019_spring/datasets/nyc_taxi_rides.csv](https://technion046195.github.io/semester_2019_spring/datasets/nyc_taxi_rides.csv)
###Code
data_file = 'https://technion046195.github.io/semester_2019_spring/datasets/nyc_taxi_rides.csv'
## Loading the data
dataset = pd.read_csv(data_file)
###Output
_____no_output_____
###Markdown
Previewing the data: printing out the first 10 rows.
###Code
## Print the number of rows in the data set
number_of_rows = len(dataset)
display(Latex('Number of rows in the dataset: $N={}$'.format(number_of_rows)))
## Show the first 10 rows
display(HTML(dataset.head(10).to_html()))
###Output
_____no_output_____
###Markdown
Plotting the data. Let us plot the histogram of the durations again.
###Code
## Prepare the figure
fig, ax = plt.subplots()
ax.hist(dataset['duration'].values, bins=300 ,density=True)
ax.set_title('Histogram of Durations')
ax.set_ylabel('PDF')
ax.set_xlabel('Duration [min]');
###Output
_____no_output_____
###Markdown
Splitting the dataset. We will split the data into an 80% train set and a 20% test set for later evaluations.
###Code
n_samples = len(dataset)
## Generate a random generator with a fixed seed (this is important to make our result reproducible)
rand_gen = np.random.RandomState(0)
## Generating a shuffled vector of indices
indices = rand_gen.permutation(n_samples)
## Split the indices into 80% train / 20% test
n_samples_train = int(n_samples * 0.8)
train_indices = indices[:n_samples_train]
test_indices = indices[n_samples_train:]
train_set = dataset.iloc[train_indices]
test_set = dataset.iloc[test_indices]
###Output
_____no_output_____
###Markdown
Attempt 1 : Normal Distribution + MLE. Calculating the model's parameters:$$\mu=\displaystyle{\frac{1}{N}\sum_i x_i} \\ \sigma=\sqrt{\displaystyle{\frac{1}{N}\sum_i\left(x_i-\mu\right)^2}}$$
###Code
## extarcting the samples
x = train_set['duration'].values
## Normal distribution parameters
mu = np.sum(x) / len(x)
sigma = np.sqrt(np.sum((x - mu) ** 2) / len(x))
display(Latex('$\\mu = {:.01f}\\ \\text{{min}}$'.format(mu)))
display(Latex('$\\sigma = {:.01f}\\ \\text{{min}}$'.format(sigma)))
###Output
_____no_output_____
###Markdown
From here on we will use the [np.mean](http://lagrange.univ-lyon1.fr/docs/numpy/1.11.0/reference/generated/numpy.mean.html) and [np.std](http://lagrange.univ-lyon1.fr/docs/numpy/1.11.0/reference/generated/numpy.std.html) functions to calculate the mean and standard deviation. In addition, [scipy.stats](https://docs.scipy.org/doc/scipy/reference/stats.html) offers a wide range of distribution models. Each model comes with a set of methods for calculating the CDF and PDF, performing an MLE fit, generating samples and more.
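As a small illustration of that interface (purely illustrative; the printed numbers depend on the data):

```python
import numpy as np
from scipy.stats import norm

mu_hat, sigma_hat = np.mean(x), np.std(x)  # moment estimates of the duration samples
loc, scale = norm.fit(x)                   # MLE fit; for the normal model this equals (mu_hat, sigma_hat)
print(norm(loc, scale).cdf(20))            # P(duration <= 20 min) under the fitted model
print(norm(loc, scale).rvs(size=3))        # draw a few synthetic durations
```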
###Code
## Define the grid
grid = np.arange(-10, 60 + 0.1, 0.1)
## Import the normal distribution model from SciPy
from scipy.stats import norm
## Define the normal distribution object
norm_dist = norm(mu, sigma)
## Calculate the normal distribution PDF over the grid
norm_pdf = norm_dist.pdf(grid)
## Prepare the figure
fig, ax = plt.subplots()
ax.hist(dataset['duration'].values, bins=300 ,density=True, label='Histogram')
ax.plot(grid, norm_pdf, label='Normal')
ax.set_title('Distribution of Durations')
ax.set_ylabel('PDF')
ax.set_xlabel('Duration [min]')
ax.legend();
fig.savefig('../media/normal.png')
###Output
_____no_output_____
###Markdown
Attempt 2 : Rayleigh Distribution + MLE. Calculating the model's parameter:$$\hat{\sigma} = \sqrt{\frac{1}{2N}\sum_i x_i^2}$$
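This estimator follows from setting the derivative of the Rayleigh log-likelihood to zero:$$\ell(\sigma)=\sum_i\left(\ln x_i-2\ln\sigma-\frac{x_i^2}{2\sigma^2}\right),\qquad \frac{\partial\ell}{\partial\sigma}=-\frac{2N}{\sigma}+\frac{1}{\sigma^3}\sum_i x_i^2=0\;\Rightarrow\;\hat{\sigma}^2=\frac{1}{2N}\sum_i x_i^2$$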
###Code
## Import the normal distribution model from SciPy
from scipy.stats import rayleigh
## Find the model's parameters using SciPy
_, sigma = rayleigh.fit(x, floc=0) ## equivalent to running: sigma = np.sqrt(np.sum(x ** 2) / len(x) / 2)
display(Latex('$\\sigma = {:.01f}$'.format(sigma)))
## Define the Rayleigh distribution object
rayleigh_dist = rayleigh(0, sigma)
## Calculate the Rayleigh distribution PDF over the grid
rayleigh_pdf = rayleigh_dist.pdf(grid)
## Prepare the figure
fig, ax = plt.subplots()
ax.hist(dataset['duration'].values, bins=300 ,density=True, label='Histogram')
ax.plot(grid, norm_pdf, label='Normal')
ax.plot(grid, rayleigh_pdf, label='Rayleigh')
ax.set_title('Distribution of Durations')
ax.set_ylabel('PDF')
ax.set_xlabel('Duration [min]')
ax.legend();
fig.savefig('../media/rayleigh.png')
###Output
_____no_output_____
###Markdown
Attempt 3 : Generalized Gamma Distribution + MLE. Numerical solution
###Code
## Import the normal distribution model from SciPy
from scipy.stats import gengamma
## Find the model's parameters using SciPy
a, c, _, sigma = gengamma.fit(x, floc=0)
display(Latex('$a = {:.01f}$'.format(a)))
display(Latex('$c = {:.01f}$'.format(c)))
display(Latex('$\\sigma = {:.01f}$'.format(sigma)))
## Define the generalized gamma distribution object
gengamma_dist = gengamma(a, c, 0, sigma)
## Calculate the generalized gamma distribution PDF over the grid
gengamma_pdf = gengamma_dist.pdf(grid)
## Prepare the figure
fig, ax = plt.subplots()
ax.hist(dataset['duration'].values, bins=300 ,density=True, label='Histogram')
ax.plot(grid, norm_pdf, label='Normal')
ax.plot(grid, rayleigh_pdf, label='Rayleigh')
ax.plot(grid, gengamma_pdf, label='Generalized Gamma')
ax.set_title('Distribution of Durations')
ax.set_ylabel('PDF')
ax.set_xlabel('Duration [min]')
ax.legend();
fig.savefig('../media/generalized_gamma.png')
###Output
_____no_output_____ |
ch03/08_Training_Dov2Vec_using_Gensim.ipynb | ###Markdown
Doc2Vec. In this notebook, we show how to train a Doc2Vec model using gensim. Importing packages: gensim and NLTK are assumed to be installed already.
###Code
import warnings
from pprint import pprint
import nltk
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
nltk.download('punkt')
warnings.filterwarnings('ignore')
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
###Markdown
Preparing the training data. First, we prepare [TaggedDocument](https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.TaggedDocument) objects as the input for training Doc2Vec with gensim. A TaggedDocument consists of a sequence of words and a sequence of tags. One or more strings can be used as tags, but for memory efficiency a unique integer ID is used in most cases.
###Code
data = [
"dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"
]
tagged_data = [
TaggedDocument(
words=word_tokenize(word),
tags=[str(i)]
) for i, word in enumerate(data)
]
tagged_data
###Output
_____no_output_____
###Markdown
Training the model. Once the training data is ready, we pass it to the [Doc2Vec](https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.Doc2Vec) class to train a model. Various parameters can be specified; `dm` selects the training algorithm: 1 gives the distributed memory model (PV-DM) and 0 gives the distributed bag of words model (PV-DBOW).
###Code
# Train the distributed BoW (PV-DBOW) model
model_dbow = Doc2Vec(
tagged_data,
vector_size=20,
min_count=1,
epochs=2,
dm=0
)
###Output
_____no_output_____
###Markdown
Once training has finished, we pass a document to the model's [infer_vector](https://radimrehurek.com/gensim/models/doc2vec.html#gensim.models.doc2vec.Doc2Vec.infer_vector) method to obtain its vector.
###Code
print(model_dbow.infer_vector(['man', 'eats', 'food']))
model_dbow.wv.most_similar("man", topn=5) # top 5 most similar words.
###Output
_____no_output_____
###Markdown
Using `n_similarity`, we compute the similarity between two sets of words.
###Code
model_dbow.wv.n_similarity(["dog"],["man"])
# Train the distributed memory (PV-DM) model
model_dm = Doc2Vec(
tagged_data,
min_count=1,
vector_size=20,
epochs=2,
dm=1
)
print("Inference Vector of man eats food")
print(model_dm.infer_vector(['man', 'eats', 'food']))
print("Most similar words to man in our corpus")
print(model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ", model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-0.01564045 0.0173833 -0.00516716 0.0037643 -0.01813941 -0.00460716
-0.01941588 -0.01952404 -0.00677244 -0.00411688 0.00786548 0.01587102
-0.00982586 -0.02477862 0.00217828 0.02137304 -0.00618664 0.00858937
0.01089258 -0.01651028]
Most similar words to man in our corpus
[('dog', 0.3310743570327759), ('eats', 0.2360897958278656), ('meat', 0.052991606295108795), ('food', -0.0032464265823364258), ('bites', -0.41033852100372314)]
Similarity between man and dog: 0.33107436
###Markdown
What happens if we use a word that does not exist in the vocabulary in a similarity comparison?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____ |
Training_Models.ipynb | ###Markdown
Setup. First, let's import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
###Output
_____no_output_____
###Markdown
The Normal Equation : Linear Regression
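For reference, the closed-form solution computed in the cell below is the normal equation$$\hat{\boldsymbol{\theta}} = \left(\mathbf{X}^\top\mathbf{X}\right)^{-1}\mathbf{X}^\top\mathbf{y},$$where $\mathbf{X}$ is the design matrix (here `X_b`, with a bias column of ones) and $\mathbf{y}$ is the vector of targets.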
###Code
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
import matplotlib.pyplot as plt
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show();
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
save_fig("linear_model_predictions_plot")
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X,y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
###Output
_____no_output_____
###Markdown
Singular Value Decomposition (SVD). The LinearRegression class is based on the scipy.linalg.lstsq() function (the name stands for "least squares"); below we call the equivalent np.linalg.lstsq() directly:
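Equivalently, the least-squares solution can be written with the Moore-Penrose pseudoinverse (computed via the SVD $\mathbf{X}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^\top$), which is what the `np.linalg.pinv` call further below uses:$$\hat{\boldsymbol{\theta}} = \mathbf{X}^{+}\mathbf{y} = \mathbf{V}\boldsymbol{\Sigma}^{+}\mathbf{U}^\top\mathbf{y}$$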
###Code
theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)
theta_best_svd
np.linalg.pinv(X_b).dot(y)
###Output
_____no_output_____
###Markdown
Linear Regression using Batch Gradient Descent
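The update implemented in the cell below uses the gradient of the MSE cost over the full training set:$$\nabla_{\boldsymbol{\theta}}\,\mathrm{MSE}(\boldsymbol{\theta}) = \frac{2}{m}\,\mathbf{X}^\top\left(\mathbf{X}\boldsymbol{\theta}-\mathbf{y}\right),\qquad \boldsymbol{\theta}_{\text{next}} = \boldsymbol{\theta} - \eta\,\nabla_{\boldsymbol{\theta}}\,\mathrm{MSE}(\boldsymbol{\theta}),$$with learning rate $\eta$ (`eta` in the code) and $m$ training instances.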
###Code
eta = 0.1 # learning rate
n_iterations = 1000
m = 100
theta = np.random.randn(2,1) # random initialization
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
theta
X_new_b.dot(theta)
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.")
n_iterations = 1000
for iteration in range(n_iterations):
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
plt.figure(figsize=(10,4))
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(132); plot_gradient_descent(theta, eta=0.01, theta_path=theta_path_bgd)
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
save_fig("gradient_descent_plot")
plt.show();
###Output
Saving figure gradient_descent_plot
###Markdown
Stochastic Gradient Descent using a simple learning schedule. "Stochastic" means random: each step uses a single, randomly picked training instance. The learning schedule determines the learning rate at each iteration.
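The schedule used below decays the learning rate over time as$$\eta(t) = \frac{t_0}{t + t_1},\qquad t_0=5,\; t_1=50,$$so early steps take larger updates and later steps progressively smaller ones.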
###Code
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
# not in book
if epoch == 0 and i < 20:
y_predict = X_new_b.dot(theta)
style = "b-" if i > 0 else "r--"
plt.plot(X_new, y_predict, style)
# end of not in book
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta)
plt.plot(X,y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0,2, 0,15])
save_fig("sgd_plot")
plt.show();
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None,
eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
###Output
_____no_output_____
###Markdown
Mini-batch Gradient Descent
###Code
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 200, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indicies = np. random.permutation(m)
X_b_shuffled = X_b[shuffled_indicies]
y_shuffled = y[shuffled_indicies]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:,0], theta_path_sgd[:, 1], "r-s",
linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:,0], theta_path_mgd[:, 1], "g-+",
linewidth=1, label="Mini-batch")
plt.plot(theta_path_bgd[:,0], theta_path_bgd[:, 1], "b-o",
linewidth=1, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
save_fig("gradient_descent_paths_plot")
plt.show();
###Output
Saving figure gradient_descent_paths_plot
###Markdown
Polynomial Regression
###Code
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3,3, 0,10])
save_fig("quadratic_data_plot")
plt.show();
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
###Output
_____no_output_____
###Markdown
X_poly now contains the original feature of X plus the square of that feature. Now you can fit a Linear Regression model to this extended training data.
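Because the data were generated above from $y = 0.5x^2 + x + 2$ plus Gaussian noise, the fitted intercept and coefficients should come out roughly as $\theta_0 \approx 2$, $\theta_1 \approx 1$ and $\theta_2 \approx 0.5$; they will not match exactly because of the noise.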
###Code
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc='upper left', fontsize=14)
plt.axis([-3,3, 0,10])
save_fig('quadratic_predictions_plot')
plt.show();
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--",2 ,2),("r-+",2,1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
('poly_features', polybig_features),
('std_scaler', std_scaler),
('lin_reg', lin_reg),
])
polynomial_regression.fit(X,y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth = width)
plt.legend(loc='upper left')
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3,3, 0,10])
save_fig("high_degree_polynomials_plot")
plt.show();
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y,
test_size=0.2,
random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc='upper right', fontsize=14)
plt.xlabel("Training set size", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0,80, 0,3])
save_fig("underfitting_learning_curves")
plt.show()
#132
###Output
_____no_output_____ |
1-ETL/10min_to_cuDF.ipynb | ###Markdown
10 Minutes to cuDF=======================Modeled after 10 Minutes to Pandas, this is a short introduction to cuDF, geared mainly for new users.
###Code
import os
import numpy as np
import pandas as pd
import cudf
np.random.seed(12)
#### Portions of this were borrowed from the
#### cuDF cheatsheet, existing cuDF documentation,
#### and 10 Minutes to Pandas.
#### Created November, 2018.
###Output
_____no_output_____
###Markdown
Object Creation--------------- Creating a `Series`.
###Code
s = cudf.Series([1,2,3,None,4])
print(s)
###Output
_____no_output_____
###Markdown
Creating a `DataFrame` by specifying values for each column.
###Code
df = cudf.DataFrame([('a', list(range(20))),
('b', list(reversed(range(20)))),
('c', list(range(20)))])
print(df)
###Output
_____no_output_____
###Markdown
Creating a `Dataframe` from a pandas `Dataframe`.
###Code
pdf = pd.DataFrame({'a': [0, 1, 2, 3],'b': [0.1, 0.2, None, 0.3]})
gdf = cudf.DataFrame.from_pandas(pdf)
print(gdf)
###Output
_____no_output_____
###Markdown
Viewing Data------------- Viewing the top rows of the GPU dataframe.
###Code
print(df.head(2))
###Output
_____no_output_____
###Markdown
Sorting by values.
###Code
print(df.sort_values(by='a', ascending=False))
###Output
_____no_output_____
###Markdown
Selection------------ Getting Selecting a single column, which yields a `cudf.Series`, equivalent to `df.a`.
###Code
print(df['a'])
###Output
_____no_output_____
###Markdown
Selection by Label Selecting rows from index 2 to index 5 from columns 'a' and 'b'.
###Code
print(df.loc[2:5, ['a', 'b']])
###Output
_____no_output_____
###Markdown
Selection by Position Selecting by integer slicing, like numpy/pandas.
###Code
print(df[3:5])
###Output
_____no_output_____
###Markdown
Selecting elements of a `Series` with direct index access.
###Code
print(s[2])
###Output
_____no_output_____
###Markdown
Boolean Indexing Selecting rows in a `Series` by direct Boolean indexing.
###Code
print(df.b[df.b > 15])
###Output
_____no_output_____
###Markdown
Selecting values from a `DataFrame` where a Boolean condition is met, via the `query` API.
###Code
print(df.query("b == 3"))
###Output
_____no_output_____
###Markdown
Supported logical operators include `>`, `<`, `>=`, `<=`, `==`, and `!=`. Setting Missing Data------------ Missing data can be replaced by using the `fillna` method.
###Code
print(s.fillna(999))
###Output
_____no_output_____
###Markdown
Operations------------ Stats Calculating descriptive statistics for a `Series`.
###Code
print(s.mean(), s.var())
###Output
_____no_output_____
###Markdown
Applymap Applying functions to a `Series`.
###Code
def add_ten(num):
return num + 10
print(df['a'].applymap(add_ten))
###Output
_____no_output_____
###Markdown
Histogramming Counting the number of occurrences of each unique value of variable.
###Code
print(df.a.value_counts())
###Output
_____no_output_____
###Markdown
String Methods Merge------------ Concat Concatenating `Series` and `DataFrames` row-wise.
###Code
print(cudf.concat([s, s]))
print(cudf.concat([df.head(), df.head()], ignore_index=True))
###Output
_____no_output_____
###Markdown
Join Performing SQL style merges.
###Code
df_a = cudf.DataFrame()
df_a['key'] = [0, 1, 2, 3, 4]
df_a['vals_a'] = [float(i + 10) for i in range(5)]
df_b = cudf.DataFrame()
df_b['key'] = [1, 2, 4]
df_b['vals_b'] = [float(i+10) for i in range(3)]
df_merged = df_a.merge(df_b, on=['key'], how='left')
print(df_merged.sort_values('key'))
###Output
_____no_output_____
###Markdown
Append Appending values from another `Series` or array-like object. `Append` does not support `Series` with nulls. For handling null values, use the `concat` method.
###Code
print(df.a.head().append(df.b.head()))
###Output
_____no_output_____
###Markdown
Grouping Like pandas, cuDF supports the Split-Apply-Combine groupby paradigm.
###Code
df['agg_col1'] = [1 if x % 2 == 0 else 0 for x in range(len(df))]
df['agg_col2'] = [1 if x % 3 == 0 else 0 for x in range(len(df))]
###Output
_____no_output_____
###Markdown
Grouping and then applying the `sum` function to the grouped data.
###Code
print(df.groupby('agg_col1').sum())
###Output
_____no_output_____
###Markdown
Grouping hierarchically then applying the `sum` function to grouped data.
###Code
print(df.groupby(['agg_col1', 'agg_col2']).sum())
###Output
_____no_output_____
###Markdown
Grouping and applying statistical functions to specific columns, using `agg`.
###Code
print(df.groupby('agg_col1').agg({'a':'max', 'b':'mean', 'c':'sum'}))
###Output
_____no_output_____
###Markdown
Reshaping------------ Time Series------------ cuDF supports `datetime` typed columns, which allow users to interact with and filter data based on specific timestamps.
###Code
import datetime as dt
date_df = cudf.DataFrame()
date_df['date'] = pd.date_range('11/20/2018', periods=72, freq='D')
date_df['value'] = np.random.sample(len(date_df))
search_date = dt.datetime.strptime('2018-11-23', '%Y-%m-%d')
print(date_df.query('date <= @search_date'))
###Output
_____no_output_____
###Markdown
Categoricals------------ cuDF supports categorical columns.
###Code
pdf = pd.DataFrame({"id":[1,2,3,4,5,6], "grade":['a', 'b', 'b', 'a', 'a', 'e']})
pdf["grade"] = pdf["grade"].astype("category")
gdf = cudf.DataFrame.from_pandas(pdf)
print(gdf)
###Output
_____no_output_____
###Markdown
Accessing the categories of a column.
###Code
print(gdf.grade.cat.categories)
###Output
_____no_output_____
###Markdown
Accessing the underlying code values of each categorical observation.
###Code
print(gdf.grade.cat.codes)
###Output
_____no_output_____
###Markdown
Plotting------------ Converting Data Representation-------------------------------- Pandas Converting a cuDF `DataFrame` to a pandas `DataFrame`.
###Code
print(df.head().to_pandas())
###Output
_____no_output_____
###Markdown
Numpy Converting a cuDF `DataFrame` to a numpy `rec.array`.
###Code
print(df.to_records())
###Output
_____no_output_____
###Markdown
Converting a cuDF `Series` to a numpy `ndarray`.
###Code
print(df['a'].to_array())
###Output
_____no_output_____
###Markdown
Arrow Converting a cuDF `DataFrame` to a PyArrow `Table`.
###Code
print(df.to_arrow())
###Output
_____no_output_____
###Markdown
Getting Data In/Out------------------------ CSV Writing to a CSV file, by first sending data to a pandas `Dataframe` on the host.
###Code
df.to_pandas().to_csv('foo.txt', index=False)
###Output
_____no_output_____
###Markdown
Reading from a csv file.
###Code
df = cudf.read_csv('foo.txt', delimiter=',',
names=['a', 'b', 'c', 'a1', 'a2'],
dtype=['int64', 'int64', 'int64', 'int64', 'int64'],
skiprows=1)
print(df)
###Output
_____no_output_____ |
02/lab2.ipynb | ###Markdown
Laboratory work 2. Dataset description: this dataset describes characteristics of red wine. The data are loaded from the UCI machine learning repository and contain 1599 records with 12 attributes. Attribute Information: - fixed acidity - volatile acidity - citric acid - residual sugar - chlorides - free sulfur dioxide - total sulfur dioxide - density - pH - sulphates - alcohol Output variable (based on sensory data): - quality (score between 0 and 10)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
DATASET = pd.read_csv('../02/winequality-red.csv')
DATASET.head()
quality = DATASET['quality'].sort_values().values
plt.hist(quality)
plt.show()
DS_PROCESSED = DATASET.copy()
min_ = DS_PROCESSED.min()
max_ = DS_PROCESSED.max()
mean_ = DS_PROCESSED.mean()
std_ = DS_PROCESSED.std()
nulls = DS_PROCESSED.isnull().sum()
stats = pd.DataFrame({'Missing': nulls, 'Min': min_, 'Max': max_, 'Mean': mean_, 'Std': std_})
stats
DS_PROCESSED = DATASET.copy()
DS_PROCESSED.corr()['quality'].sort_values().to_frame()
from sklearn.model_selection import train_test_split
FEATURE_LABELS = ['alcohol', 'sulphates', 'citric acid','fixed acidity', 'residual sugar']
FEATURES = DS_PROCESSED[FEATURE_LABELS]
TARGET = DS_PROCESSED['quality']
X_train, X_test, Y_train, Y_test = train_test_split(FEATURES, TARGET, test_size = 0.4)
import seaborn as sns
sns.set(style= 'whitegrid', context = 'notebook')
sns.pairplot(FEATURES , height=2.5)
plt.show()
cm = np.corrcoef(FEATURES.values.T)
sns.set(font_scale=1.5)
_, ax = plt.subplots(figsize=(30, 30))
hm = sns.heatmap(cm, annot=True, cbar=True, square=True, fmt='.2f', ax=ax, xticklabels=FEATURE_LABELS, yticklabels=FEATURE_LABELS)
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
def evaluate_model(Y_pred, Y):
mse = mean_squared_error(Y, Y_pred)
r2 = r2_score(Y, Y_pred)
print(f'MSE = {mse}')
print(f'R2 = {r2}')
###Output
_____no_output_____
###Markdown
Linear regression
###Code
from sklearn.linear_model import LinearRegression
linear = LinearRegression().fit(X_train, Y_train)
evaluate_model(linear.predict(X_test), Y_test)
###Output
MSE = 0.44006167307081745
R2 = 0.28472231520836666
###Markdown
Polynomial regression
###Code
from sklearn.preprocessing import PolynomialFeatures
def train_polynomial(degree):
polynomial_features= PolynomialFeatures(degree=degree)
X_train_poly = polynomial_features.fit_transform(X_train)
polynomial = LinearRegression().fit(X_train_poly, Y_train)
return polynomial_features, polynomial
polynomial_features, polynomial = train_polynomial(degree=2)
X_test_poly = polynomial_features.fit_transform(X_test)
evaluate_model(polynomial.predict(X_test_poly), Y_test)
polynomial_features, polynomial = train_polynomial(degree=3)
X_test_poly = polynomial_features.fit_transform(X_test)
evaluate_model(polynomial.predict(X_test_poly), Y_test)
polynomial_features, polynomial = train_polynomial(degree=4)
X_test_poly = polynomial_features.fit_transform(X_test)
evaluate_model(polynomial.predict(X_test_poly), Y_test)
###Output
MSE = 1.9240836606379819
R2 = -2.127411884163498
###Markdown
Random forest
###Code
from sklearn.ensemble import RandomForestRegressor
FEATURES = DS_PROCESSED.loc[:, DS_PROCESSED.columns != 'quality']
TARGET = DS_PROCESSED['quality']
forest = RandomForestRegressor(n_estimators=10).fit(FEATURES, TARGET)
def print_importance(forest, columns):
df = pd.DataFrame(
zip(
forest.feature_importances_.round(decimals=4),
columns
),
columns = ['importance', 'feature']
).sort_values('importance', ascending=False)
return df
print_importance(forest, FEATURES.columns).head()
FEATURES = DS_PROCESSED[['alcohol', 'volatile acidity', 'sulphates', 'total sulfur dioxide', 'citric acid']]
TARGET = DS_PROCESSED['quality']
X_train, X_test, Y_train, Y_test = train_test_split(FEATURES, TARGET, test_size = 0.4)
n_estimators = [2, 5, 10, 20, 50, 100, 200, 500]
for n in n_estimators:
print(f'\n{n} estimators')
forest = RandomForestRegressor(n_estimators=n).fit(X_train, Y_train)
evaluate_model(forest.predict(X_test), Y_test)
###Output
2 estimators
MSE = 0.57734375
R2 = 0.1190287261903431
5 estimators
MSE = 0.44212500000000005
R2 = 0.3253595896152083
10 estimators
MSE = 0.423046875
R2 = 0.35447098136951394
20 estimators
MSE = 0.41531250000000003
R2 = 0.36627289694558374
50 estimators
MSE = 0.397635
R2 = 0.3932470690792047
100 estimators
MSE = 0.39181609375000004
R2 = 0.40212616277553626
200 estimators
MSE = 0.3858859375
R2 = 0.4111750133181339
500 estimators
MSE = 0.39185013750000003
R2 = 0.40207421527319864
|
devkit/notebooks/final_notebooks/python_notes.ipynb | ###Markdown
numpy indexing: multi-row mixed indexing (when the number of elements taken per row is the same). Suppose you have a 2D array a and an index array a_idx with len(a)==len(a_idx) and a_idx.shape[1] = k, i.e. k elements are taken from each row; each row of a_idx holds the column indices to take from the corresponding row of a. Because the column indices differ from row to row, this kind of indexing is called multi-row mixed indexing here, and a[a_idx] cannot be used directly.
###Code
a = np.array([[1.2, 1.4, 1.12, 2.3], [2.1, 2.12, 1.56, 1.74], [3.23, 2.12, 4.23, 2.34]])
a
k = 3 # the number of elements taken per row must be the same, otherwise the result cannot be packed into a single array
a_idx = np.array([[0,3,2], [1,2,3], [0,1,2]]) # take elements 0, 3, 2 from row 0 of a, elements 1, 2, 3 from row 1, and elements 0, 1, 2 from row 2
a[
np.repeat(np.arange(len(a_idx)), k),
a_idx.ravel()].reshape(len(a_idx), k)
###Output
_____no_output_____
###Markdown
Parallel Processing in Python
###Code
import multiprocessing as mp
np.random.RandomState(100)
arr = np.random.randint(0, 10, size=[2000000, 5])
data = arr.tolist()
def howmany_within_range(row, minimum, maximum):
"""Returns how many numbers lie within `maximum` and `minimum` in a given `row`"""
count = 0
for n in row:
if minimum <= n <= maximum:
count = count + 1
return count
results = []
for row in data:
results.append(howmany_within_range(row, minimum=4, maximum=8))
mp.cpu_count() // 2
# Step 1: Init multiprocessing.Pool()
pool = mp.Pool(mp.cpu_count()// 2)
# Step 2: `pool.apply` the `howmany_within_range()`
results = [pool.apply(howmany_within_range, args=(row, 4, 8)) for row in data]
# Step 3: Don't forget to close
pool.close()
###Output
Process ForkPoolWorker-2:
Process ForkPoolWorker-4:
Process ForkPoolWorker-1:
Process ForkPoolWorker-3:
Traceback (most recent call last):
###Markdown
Compute pairwise distances between samples and keep only the k smallest distances per sample.
- distance.pdist: computes the pairwise distances between the samples in an n-dimensional space X. Parameters: X, metric.
- distance.cdist: computes the pairwise distances between XA and XB. Parameters: XA, XB, metric.
- np.partition: partitions a given array around a given position index and returns the partitioned array. Parameters: the array a and the position index kth. For example, kth=10 means: let n be the 10th smallest value of a; in the returned array n sits at position 10, every element before it is smaller than n and every element after it is larger, while the ordering inside each part is left unspecified. kth may be negative, e.g. -3 partitions a around its 3rd largest element. Typical use case: to find only the 10 largest values of a very large array, fully sorting it and taking the last 10 elements is wasteful; since only 10 values are needed, np.partition can be used instead.
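A minimal, self-contained illustration of that top-k idea (not part of the distance computation below):

```python
import numpy as np

arr = np.random.randn(1000000)
k = 10
# np.partition moves the k largest values to the end of the array (in no particular order),
# so only those k values need to be sorted instead of the whole array.
top_k = np.sort(np.partition(arr, -k)[-k:])[::-1]
# np.argpartition works the same way but returns indices, which is what the
# k-smallest-distance computation in the next cell relies on.
```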
###Code
from scipy.spatial import distance
nsamples = 10005
nfeatures = 20
X = np.random.randn(nsamples, nfeatures)
njobs = 20
step = int(np.ceil(nsamples / njobs))
step
X.shape
i = 0
st = i*step
end = (i+1)*step
w = distance.cdist(XA=X[st:end], XB=X, metric="euclidean")
w.shape
w
k = 10
kths = tuple(np.arange(1, k+1))
z = np.zeros((nsamples, k))
pairs = np.zeros_like(z)
pairs.shape
z.shape
w.shape
w_parted_ix = np.argpartition(w, kths, axis=1)
w_parted_ix
w_parted_ix[:, 1:k+1].shape
z[st:end, :] = w_parted_ix[:, 1:k+1]
z[0]
ixs_rows = np.repeat(np.arange(len(w)), k)
ixs_cols = tuple(w_parted_ix[:, 1:k+1].ravel())
pairs[st:end, :] = w[ixs_rows, ixs_cols].reshape(len(w), k)
###Output
_____no_output_____ |
Model backlog/Train/135-jigsaw-fold1-xlm-roberta-ratio-1-2-sample-drop.ipynb | ###Markdown
Dependencies
###Code
import json, warnings, shutil, glob
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
pd.set_option('max_colwidth', 120)
pd.set_option('display.float_format', lambda x: '%.4f' % x)
###Output
[34m[1mwandb[0m: [33mWARNING[0m W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable.
###Markdown
TPU configuration
###Code
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
###Output
Running on TPU grpc://10.0.0.2:8470
REPLICAS: 8
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-192-ratio-1-clean-polish/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_1/'
fold_n = 1
# Unzip files
!tar -xf /kaggle/input/jigsaw-data-split-roberta-192-ratio-1-clean-polish/fold_1.tar.gz
###Output
Train samples: 267220
###Markdown
Model parameters
###Code
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 192,
"BATCH_SIZE": 128,
"EPOCHS": 3,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": None,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Learning rate schedule
###Code
lr_min = 1e-7
lr_start = 0
lr_max = config['LEARNING_RATE']
step_size = (len(k_fold[k_fold[f'fold_{fold_n}'] == 'train']) * 2) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = total_steps * 0.1
decay = .9998
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps=warmup_steps,
hold_max_steps=hold_max_steps, lr_start=lr_start,
lr_max=lr_max, lr_min=lr_min, decay=decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
###Output
Learning rate schedule: 0 to 9.96e-06 to 1.66e-06
###Markdown
Model
###Code
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
N_SAMPLES = 2
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
x_avg = layers.GlobalAveragePooling1D()(last_hidden_state)
x_max = layers.GlobalMaxPooling1D()(last_hidden_state)
x = layers.Concatenate()([x_avg, x_max])
samples = []
sample_mask = layers.Dense(64, activation='relu')
for n in range(N_SAMPLES):
sample = layers.Dropout(.5)(x)
sample = sample_mask(sample)
sample = layers.Dense(1, activation='sigmoid', name=f'sample_{n}')(sample)
samples.append(sample)
output = layers.Average(name='output')(samples)
model = Model(inputs=[input_ids, attention_mask], outputs=output)
return model
###Output
_____no_output_____
###Markdown
Train
###Code
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train_int.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid = np.load(base_data_path + 'x_valid.npy')
y_valid = np.load(base_data_path + 'y_valid_int.npy').reshape(x_valid.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)
#################### ADD TAIL ####################
x_train_tail = np.load(base_data_path + 'x_train_tail.npy')
y_train_tail = np.load(base_data_path + 'y_train_int_tail.npy').reshape(x_train_tail.shape[1], 1).astype(np.float32)
x_train = np.hstack([x_train, x_train_tail])
y_train = np.vstack([y_train, y_train_tail])
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']
valid_2_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
valid_2_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid, y_valid, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
valid_2_data_iter = iter(valid_2_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_auc.update_state(y, probabilities)
valid_loss.update_state(loss)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
@tf.function
def valid_2_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss = loss_fn(y, probabilities)
valid_2_auc.update_state(y, probabilities)
valid_2_loss.update_state(loss)
for _ in tf.range(valid_2_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
with strategy.scope():
model = model_fn(config['MAX_LEN'])
lr = lambda: exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
warmup_steps=warmup_steps, hold_max_steps=hold_max_steps,
lr_start=lr_start, lr_max=lr_max, lr_min=lr_min, decay=decay)
optimizer = optimizers.Adam(learning_rate=lr)
loss_fn = losses.binary_crossentropy
train_auc = metrics.AUC()
valid_auc = metrics.AUC()
valid_2_auc = metrics.AUC()
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
valid_2_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
'val_loss': valid_loss, 'val_auc': valid_auc,
'val_2_loss': valid_2_loss, 'val_2_auc': valid_2_auc}
history = custom_fit_2(model, metrics_dict, train_step, valid_step, valid_2_step, train_data_iter,
valid_data_iter, valid_2_data_iter, step_size, valid_step_size, valid_2_step_size,
config['BATCH_SIZE'], config['EPOCHS'], config['ES_PATIENCE'], save_last=False)
# model.save_weights('model.h5')
# Make predictions
# x_train = np.load(base_data_path + 'x_train.npy')
# x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
# train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
# valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
# k_fold.loc[k_fold[f'fold_{fold_n}'] == 'train', f'pred_{fold_n}'] = np.round(train_preds)
# k_fold.loc[k_fold[f'fold_{fold_n}'] == 'validation', f'pred_{fold_n}'] = np.round(valid_preds)
valid_df[f'pred_{fold_n}'] = valid_ml_preds
# Fine-tune on validation set
#################### ADD TAIL ####################
x_valid_ml_tail = np.hstack([x_valid_ml, np.load(database_base_path + 'x_valid_tail.npy')])
y_valid_ml_tail = np.vstack([y_valid_ml, y_valid_ml])
valid_step_size_tail = x_valid_ml_tail.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_valid_ml_tail, y_valid_ml_tail, config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)
# Step functions
@tf.function
def train_ml_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss = loss_fn(y, probabilities)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
train_auc.update_state(y, probabilities)
train_loss.update_state(loss)
for _ in tf.range(valid_step_size_tail):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
# Fine-tune on validation set
optimizer = optimizers.Adam(learning_rate=3e-6)
history_ml = custom_fit_2(model, metrics_dict, train_ml_step, valid_step, valid_2_step, train_ml_data_iter,
valid_data_iter, valid_2_data_iter, valid_step_size_tail, valid_step_size, valid_2_step_size,
config['BATCH_SIZE'], 2, config['ES_PATIENCE'], save_last=False)
# Join history
for key in history_ml.keys():
history[key] += history_ml[key]
model.save_weights('model.h5')
# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df[f'pred_ml_{fold_n}'] = valid_ml_preds
### Delete data dir
shutil.rmtree(base_data_path)
###Output
Train for 125 steps, validate for 62 steps, validate_2 for 417 steps
EPOCH 1/2
time: 207.4s loss: 0.2164 auc: 0.9387 val_loss: 0.1929 val_auc: 0.9677 val_2_loss: 0.2712 val_2_auc: 0.9709
EPOCH 2/2
time: 69.5s loss: 0.1602 auc: 0.9670 val_loss: 0.1444 val_auc: 0.9873 val_2_loss: 0.2397 val_2_auc: 0.9709
Training finished
###Markdown
Model loss graph
###Code
plot_metrics_2(history)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
# display(evaluate_model_single_fold(k_fold, fold_n, label_col='toxic_int').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Confusion matrix
###Code
# train_set = k_fold[k_fold[f'fold_{fold_n}'] == 'train']
# validation_set = k_fold[k_fold[f'fold_{fold_n}'] == 'validation']
# plot_confusion_matrix(train_set['toxic_int'], train_set[f'pred_{fold_n}'],
# validation_set['toxic_int'], validation_set[f'pred_{fold_n}'])
###Output
_____no_output_____
###Markdown
Model evaluation by language
###Code
display(evaluate_model_single_fold_lang(valid_df, fold_n).style.applymap(color_map))
# ML fine-tuned preds
display(evaluate_model_single_fold_lang(valid_df, fold_n, pred_col='pred_ml').style.applymap(color_map))
###Output
_____no_output_____
###Markdown
Visualize predictions
###Code
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
###Output
English validation set
###Markdown
Test set predictions
###Code
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
###Output
_____no_output_____ |
doc/ipython_notebooks_src/tutorial-coupled-relaxation-of-two-nanodisks.ipynb | ###Markdown
Relaxation of two coupled nanodisks. **Author**: Maximilian Albert. **Date**: Jan 2013. **Purpose**: This notebook illustrates a typical relaxation. It uses a mesh consisting of two separated nanodisks and also shows how such a mesh can be produced by writing a `.geo` file directly from the code. --- In this simulation we explore the relaxation behaviour of two nanodisks which are separated by a small distance. The initial magnetisation in each disk is uniform, but with a different (random) direction for each of them. We hope to see some kind of interaction, so that ideally in the final relaxed state the magnetisation in the two disks is aligned along the common axis (due to the magnetostatic interaction). First we import all the relevant modules and functions. We also change the logging level from the default 'DEBUG' to 'INFO' to avoid cluttering the notebook with lots of debugging messages.
###Code
from paraview import servermanager
from finmag.util.meshes import from_geofile, plot_mesh, plot_mesh_with_paraview
from finmag import sim_with
from finmag.util.helpers import vector_valued_function
from finmag.util.visualization import render_paraview_scene
import finmag
import numpy as np
finmag.set_logging_level("INFO")
###Output
[2014-06-09 18:43:58] INFO: Finmag logging output will be appended to file: '/home/albert/.finmag/global.log'
[2014-06-09 18:43:58] DEBUG: Building modules in 'native'...
[2014-06-09 18:43:59] DEBUG: FinMag 5047:e8225c1b7a79ea431efa470d26532258c63bb6ef
[2014-06-09 18:43:59] DEBUG: Dolfin 1.4.0 Matplotlib 1.3.1
[2014-06-09 18:43:59] DEBUG: Numpy 1.8.1 Scipy 0.12.0
[2014-06-09 18:43:59] DEBUG: IPython 2.1.0 Python 2.7.5+
[2014-06-09 18:43:59] DEBUG: Paraview 4.0.1-1 Sundials 2.5.0
[2014-06-09 18:43:59] DEBUG: Boost-Python <unknown> Linux Linux Mint 16 Petra
[2014-06-09 18:43:59] DEBUG: Registering debug signal handler. Press Ctrl-Z at any time to stop execution and jump into the debugger.
###Markdown
First we set the parameters which define the geometry, such as the disk radius/height and their separation. The two disks are put on the positive and negative x-axis with equal distances to the origin.
###Code
# Geometry parameters:
r = 25 # disk radius
h = 10 # disk height
s = 30 # distance from the origin to the center of each disk
maxh = 3.0 # mesh discretisation
unit_length = 1e-9 # the mesh units are given in nm
assert(s > r) # Disk separation should be greater than the radius
###Output
_____no_output_____
###Markdown
Next we set the material parameters for Permalloy and define the initial magnetisations for each disk.
###Code
# Material parameters
Ms = 8.6e5
A = 13.0e-12
alpha = 1.0
m_init_1 = np.array([-0.7, 0.1, 0.1]) # m_init for first disk
m_init_2 = np.array([0.6, 0.2, -0.2]) # m_init for second disk
demag_solver = 'FK'
###Output
_____no_output_____
###Markdown
We also define some helper strings used in filename, directory names etc., which make it easier to distinguish between simulations with different parameters.
###Code
geom_descr = "r_{:05.1f}__h_{:05.1f}__s_{:05.1f}__maxh_{:04.1f}".format(r, h, s, maxh)
sim_descr = "sim_{}__m_init_1_{:03.1f}_{:03.1f}_{:03.1f}__m_init_2_{:03.1f}_{:03.1f}_{:03.1f}".format(
geom_descr, m_init_1[0], m_init_1[1], m_init_1[2], m_init_2[0], m_init_2[1], m_init_2[2])
print "geom_descr: {}".format(geom_descr)
print "sim_descr: {}".format(sim_descr)
###Output
geom_descr: r_025.0__h_010.0__s_030.0__maxh_03.0
sim_descr: sim_r_025.0__h_010.0__s_030.0__maxh_03.0__m_init_1_-0.7_0.1_0.1__m_init_2_0.6_0.2_-0.2
###Markdown
Now we create the mesh and load it. Since we would like to be able to change parameters, we build the CSG description of the mesh geometry interactively here, write it to a .geo file, and then load that file.
###Code
import textwrap
import os
meshfilename = os.path.join("meshes", "mesh__{}.geo".format(geom_descr))
if not os.path.exists(meshfilename):
# Interactively write the mesh file from here so that we
# can adapt the geometry parameters from this notebook.
if not os.path.exists("meshes"):
os.mkdir('meshes')
csg = textwrap.dedent("""\
algebraic3d
solid disk1 = cylinder (-{s}, 0, 1; -{s}, 0, -1; {r} )
and plane (0, 0, 0; 0, 0, -1)
and plane (0, 0, {h}; 0, 0, 1) -maxh = {maxh};
solid disk2 = cylinder ({s}, 0, 1; {s}, 0, -1; {r} )
and plane (0, 0, 0; 0, 0, -1)
and plane (0, 0, {h}; 0, 0, 1) -maxh = {maxh};
tlo disk1;
tlo disk2;
""".format(s=s, r=r, h=h, maxh=maxh))
with open(meshfilename, "w") as f:
f.write(csg)
mesh = from_geofile(meshfilename)
###Output
[2014-06-09 18:44:11] WARNING: Warning: Ignoring netgen's output status of 34304.
###Markdown
Plot the mesh to be sure that it looks ok. Unfortunately, matplotlib doesn't support equal aspect ratios for 3D plots (yet), but the axes labels indicate that the disks have the correct proportions.
###Code
plot_mesh(mesh, figsize=(10, 5))
###Output
_____no_output_____
###Markdown
Alternatively, we can plot the mesh using Paraview.
###Code
plot_mesh_with_paraview(mesh)
###Output
_____no_output_____
###Markdown
Now we actually set the initial magnetisation in the two disks. Since m_init is different in each of the disks, this is done via a function that distinguishes the mesh points in each disk according to their x-coordinate.
###Code
def fun_m_init(pt):
if pt[0] < 0:
return m_init_1
else:
return m_init_2
m_init = vector_valued_function(fun_m_init, mesh)
###Output
_____no_output_____
###Markdown
Now create the Simulation object (with the mesh and material parameters defined above).
###Code
sim = sim_with(mesh, Ms=Ms, m_init=m_init, alpha=alpha,
unit_length=unit_length, A=A,
demag_solver=demag_solver,
name="sim_01__relaxation_of_two_nanodisks")
###Output
[2014-06-09 18:44:54] INFO: Finmag logging output will be written to file: '/home/albert/work/code/finmag/doc/ipython_notebooks_src/sim_01__relaxation_of_two_nanodisks.log' (any old content will be overwritten).
[2014-06-09 18:44:54] INFO: Creating Sim object 'sim_01__relaxation_of_two_nanodisks' (rank=0/1).
[2014-06-09 18:44:54] INFO: <Mesh of topological dimension 3 (tetrahedra) with 1973 vertices and 7295 cells, ordered>
###Markdown
... and relax the configuration. While doing so, we save vtk snapshots of the magnetisation configuration every 50 ps for later analysis. Since the simulation takes a few minutes to finish, we also print a message every 0.5 ns (of simulation time) to keep us informed about the progress (note that this would not be necessary if we had used the 'DEBUG' logging level above, but it is also a nice illustration of how to use the scheduler for these purposes).
###Code
def print_simulation_time(sim):
finmag.logger.info("Reached simulation time: {} ns".format(sim.t * 1e9))
sim.schedule(print_simulation_time, every=0.5e-9)
sim.schedule('save_vtk', filename='snapshots/{}/relaxation.pvd'.format(sim_descr), every=5e-11, overwrite=True)
sim.relax()
###Output
[2014-06-09 18:45:00] INFO: Create integrator sundials with kwargs={}
[2014-06-09 18:45:00] INFO: Simulation will run until relaxation of the magnetisation.
[2014-06-09 18:45:00] INFO: Reached simulation time: 0.0 ns
[2014-06-09 18:45:13] INFO: Reached simulation time: 0.5 ns
[2014-06-09 18:45:20] INFO: Reached simulation time: 1.0 ns
[2014-06-09 18:45:26] INFO: Reached simulation time: 1.5 ns
[2014-06-09 18:45:32] INFO: Reached simulation time: 2.0 ns
[2014-06-09 18:45:38] INFO: Reached simulation time: 2.5 ns
[2014-06-09 18:45:41] INFO: Relaxation finished at time t = 2.8e-09.
###Markdown
Here is an image of the initial state. It shows that each disk has uniform magnetisation but they point in different directions, as specified by `m_init_1` and `m_init_2`.
###Code
render_paraview_scene('snapshots/{}/relaxation.pvd'.format(sim_descr),
timesteps=0, color_by_axis='X', view_size=(1000, 800))
###Output
_____no_output_____
###Markdown
And this is an image of the relaxed state. It clearly shows how the magnetisation in the two disks is now aligned due to the magnetostatic interaction.
###Code
render_paraview_scene('snapshots/{}/relaxation.pvd'.format(sim_descr),
timesteps=42, color_by_axis='Y', view_size=(1000, 800))
###Output
_____no_output_____ |
Deep Learning for Computer Vision/DL_CV_Assessment_Solution.ipynb | ###Markdown
Deep Learning for Image Classification Assessment SOLUTION

Welcome to your assessment! Follow the instructions in bold below to complete the assessment. If you get stuck, check out the solutions video and notebook. (Make sure to run the solutions notebook before posting a question to the QA forum please, thanks!)

------------

The Challenge

**Your task is to build an image classifier with Keras and Convolutional Neural Networks for the Fashion MNIST dataset. This data set includes 10 labels of different clothing types with 28 by 28 *grayscale* images. There is a training set of 60,000 images and 10,000 test images.**

| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |

The Data

**TASK 1: Run the code below to download the dataset using Keras.**
###Code
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
###Output
Using TensorFlow backend.
###Markdown
Visualizing the Data**TASK 2: Use matplotlib to view an image from the data set. It can be any image from the data set.**
###Code
import matplotlib.pyplot as plt
%matplotlib inline
x_train[0]
plt.imshow(x_train[0])
y_train[0]
###Output
_____no_output_____
###Markdown
Preprocessing the Data**TASK 3: Normalize the X train and X test data by dividing by the max value of the image arrays.**
###Code
x_train.max()
x_train = x_train/255
x_test = x_test/255
###Output
_____no_output_____
###Markdown
**Task 4: Reshape the X arrays to include a 4 dimension of the single channel. Similar to what we did for the numbers MNIST data set.**
###Code
x_train.shape
x_train = x_train.reshape(60000,28,28,1)
x_test = x_test.reshape(10000,28,28,1)
###Output
_____no_output_____
###Markdown
**TASK 5: Convert the y_train and y_test values to be one-hot encoded for categorical analysis by Keras.**
###Code
from keras.utils import to_categorical
y_train
y_cat_train = to_categorical(y_train)
y_cat_test = to_categorical(y_test)
###Output
_____no_output_____
###Markdown
Building the Model

**TASK 5: Use Keras to create a model consisting of at least the following layers (but feel free to experiment):**

* 2D Convolutional Layer, filters=32 and kernel_size=(4,4)
* Pooling Layer where pool_size = (2,2)
* Flatten Layer
* Dense Layer (128 Neurons, but feel free to play around with this value), RELU activation
* Final Dense Layer of 10 Neurons with a softmax activation

**Then compile the model with these parameters: loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']**
###Code
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPool2D, Flatten
model = Sequential()
# CONVOLUTIONAL LAYER
model.add(Conv2D(filters=32, kernel_size=(4,4),input_shape=(28, 28, 1), activation='relu',))
# POOLING LAYER
model.add(MaxPool2D(pool_size=(2, 2)))
# FLATTEN THE POOLED FEATURE MAPS (12 by 12 by 32 --> 4608) BEFORE THE FINAL DENSE LAYERS
model.add(Flatten())
# 128 NEURONS IN DENSE HIDDEN LAYER (YOU CAN CHANGE THIS NUMBER OF NEURONS)
model.add(Dense(128, activation='relu'))
# LAST LAYER IS THE CLASSIFIER, THUS 10 POSSIBLE CLASSES
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 25, 25, 32) 544
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4608) 0
_________________________________________________________________
dense_1 (Dense) (None, 128) 589952
_________________________________________________________________
dense_2 (Dense) (None, 10) 1290
=================================================================
Total params: 591,786
Trainable params: 591,786
Non-trainable params: 0
_________________________________________________________________
###Markdown
Training the Model**TASK 6: Train/Fit the model to the x_train set. Amount of epochs is up to you.**
###Code
model.fit(x_train,y_cat_train,epochs=10)
###Output
Epoch 1/10
60000/60000 [==============================] - 5s 86us/step - loss: 0.1802 - acc: 0.9365
Epoch 2/10
60000/60000 [==============================] - 5s 87us/step - loss: 0.1679 - acc: 0.9395
Epoch 3/10
60000/60000 [==============================] - 5s 88us/step - loss: 0.1579 - acc: 0.9439
Epoch 4/10
60000/60000 [==============================] - 5s 87us/step - loss: 0.1502 - acc: 0.9469
Epoch 5/10
60000/60000 [==============================] - 5s 86us/step - loss: 0.1427 - acc: 0.9496
Epoch 6/10
60000/60000 [==============================] - 5s 87us/step - loss: 0.1397 - acc: 0.9523
Epoch 7/10
60000/60000 [==============================] - 5s 87us/step - loss: 0.1312 - acc: 0.9551
Epoch 8/10
60000/60000 [==============================] - 5s 86us/step - loss: 0.1274 - acc: 0.9559
Epoch 9/10
60000/60000 [==============================] - 5s 84us/step - loss: 0.1238 - acc: 0.9582
Epoch 10/10
60000/60000 [==============================] - 5s 84us/step - loss: 0.1201 - acc: 0.9588
###Markdown
Evaluating the Model**TASK 7: Show the accuracy,precision,recall,f1-score the model achieved on the x_test data set. Keep in mind, there are quite a few ways to do this, but we recommend following the same procedure we showed in the MNIST lecture.**
###Code
model.metrics_names
model.evaluate(x_test,y_cat_test)
from sklearn.metrics import classification_report
predictions = model.predict_classes(x_test)
y_cat_test.shape
y_cat_test[0]
predictions[0]
y_test
print(classification_report(y_test,predictions))
###Output
precision recall f1-score support
0 0.86 0.85 0.85 1000
1 0.99 0.97 0.98 1000
2 0.88 0.83 0.85 1000
3 0.91 0.91 0.91 1000
4 0.83 0.88 0.85 1000
5 0.97 0.98 0.98 1000
6 0.73 0.76 0.74 1000
7 0.95 0.97 0.96 1000
8 0.99 0.97 0.98 1000
9 0.98 0.94 0.96 1000
avg / total 0.91 0.91 0.91 10000
|
Daily/20150902_phoenix_cifist_bcs.ipynb | ###Markdown
Phoenix BT-Settl Bolometric CorrectionsFiguring out the best method of handling Phoenix bolometric correction files.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate as scint
###Output
_____no_output_____
###Markdown
Change to directory containing bolometric correction files.
###Code
cd /Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15/
###Output
/Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15
###Markdown
Load a bolometric correction table, here for the Johnson photometric system (Vega magnitudes), matching the file read below.
###Code
bc_table = np.genfromtxt('colmag.BT-Settl.server.JOHNSON.Vega', comments='!')
###Output
_____no_output_____
###Markdown
Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is structured such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment. Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
###Code
test_surface = scint.LinearNDInterpolator(bc_table[:, :2], bc_table[:, 4:])
###Output
_____no_output_____
###Markdown
The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
###Code
test_surface(np.array([1500., 5.0]))
###Output
_____no_output_____
###Markdown
This agrees with data in the bolometric correction table.
```
  Teff   logg  [Fe/H]  [a/Fe]     B       V       R       I
1500.00  5.00   0.00    0.00  -15.557 -16.084 -11.560  -9.291
```
Now, let's raise the temperature.
###Code
test_surface(np.array([3000., 5.0]))
###Output
_____no_output_____
###Markdown
Again, we have a good match to tabulated values,
```
  Teff   logg  [Fe/H]  [a/Fe]    B      V      R      I
3000.00  5.00   0.00    0.00  -6.603 -5.641 -4.566 -3.273
```
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
###Code
test_surface(np.array([3000., 5.0]))
###Output
_____no_output_____
###Markdown
This appears consistent. What about progressing to lower metallicity values?
###Code
test_surface(np.array([3000., 5.0]))
###Output
_____no_output_____
###Markdown
For reference, at [Fe/H] = $-0.5$ dex, we have
```
  Teff   logg  [Fe/H]  [a/Fe]    B      V      R      I
3000.00  5.00  -0.50    0.20  -6.533 -5.496 -4.424 -3.154
```
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolated values lie between the values at the two respective nodes.
###Code
iso = np.genfromtxt('/Users/grefe950/evolve/dmestar/iso/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
###Output
_____no_output_____
###Markdown
Make sure there are magnitudes and colors associated with this isochrone.
###Code
iso.shape
###Output
_____no_output_____
###Markdown
A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
###Code
test_bcs = test_surface(10**iso[:,1], iso[:, 2])
test_bcs.shape
###Output
_____no_output_____
###Markdown
For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
###Code
bol_mags = 4.74 - 2.5*iso[:, 3]
for i in range(test_bcs.shape[1]):
bcs = -1.0*np.log10(10**iso[:, 1]/5777.) + test_bcs[:, i] - 5.0*iso[:, 4]
if i == 0:
test_mags = bol_mags - bcs
else:
test_mags = np.column_stack((test_mags, bol_mags - bcs))
iso[50, 0:4], iso[50, 6:], test_mags[50]
###Output
_____no_output_____
###Markdown
Let's try something different: using the color tables provided by the Phoenix group, from which the bolometric corrections are calculated.
###Code
col_table = np.genfromtxt('colmag.BT-Settl.server.COUSINS.Vega', comments='!')
###Output
_____no_output_____
###Markdown
Create an interpolation surface from the magnitude table.
###Code
col_surface = scint.LinearNDInterpolator(col_table[:, :2], col_table[:, 4:8])
###Output
_____no_output_____
###Markdown
Compute magnitudes for a Dartmouth isochrone.
###Code
phx_mags = col_surface(10.0**iso[:, 1], iso[:, 2])
###Output
_____no_output_____
###Markdown
Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
###Code
for i in range(phx_mags.shape[1]):
phx_mags[:, i] = phx_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
###Output
_____no_output_____
###Markdown
Now compare against MARCS values.
###Code
iso[40, :5], iso[40, 6:], phx_mags[40]
###Output
_____no_output_____
###Markdown
Load an isochrone from the Lyon-Phoenix series.
###Code
phx_iso = np.genfromtxt('/Users/grefe950/Notebook/Projects/ngc2516_spots/data/phx_isochrone_120myr.txt')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)
ax[0].set_xlim(0.0, 2.0)
ax[1].set_xlim(0.0, 4.0)
ax[0].set_ylim(16, 2)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c="#b22222")
ax[0].plot(phx_mags[:, 0] - phx_mags[:, 1], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[0].plot(phx_iso[:, 7] - phx_iso[:, 8], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c="#b22222")
ax[1].plot(phx_mags[:, 1] - phx_mags[:, 3], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[1].plot(phx_iso[:, 8] - phx_iso[:, 10], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
###Output
_____no_output_____
###Markdown
Export a new isochrone with colors from AGSS09 (PHX)
###Code
new_isochrone = np.column_stack((iso[:, :6], phx_mags))
np.savetxt('/Users/grefe950/Notebook/Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso',
new_isochrone, fmt='%16.8f')
###Output
_____no_output_____
###Markdown
--- Separate Test Case: These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first.
###Code
tmp = -10.*np.log10(3681./5777.) + test_surface(3681., 4.78, 0.0) #+ 5.0*np.log10(0.477)
tmp
4.74 - 2.5*(-1.44) - tmp
###Output
_____no_output_____ |
FuzzyMatching.ipynb | ###Markdown
This notebook demonstrates how to match metadata records in HTRC with metadata records from other sources. In this example, we use strings of "author" and "title" for matching. Here we use two datasets: (1) part of Hathifiles and (2) CMU Book Summary Dataset.
###Code
#Yuerong Hu
import pandas as pd
from fuzzywuzzy import fuzz
###Output
_____no_output_____
###Markdown
Import and preprocess the datasets 1. Import the open access dataset downloaded from https://www.kaggle.com/applecrazy/cmu-book-summary-dataset
###Code
df=pd.read_csv('booksummaries.txt',sep='\t')
# take a look at the dataset
df.head()
# We need to add column names to it and select rows
# Add column names according to the readme file
# 1. Wikipedia article ID
# 2. Freebase ID
# 3. Book title
# 4. Author
# 5. Publication date
# 6. Book genres (Freebase ID:name tuples)
# 7. Plot summary
# Taking into consideration the matching later ,we use "title" and "author" instead of "Book title" and "Author"
df.columns = ['Wikipedia article ID','Freebase ID','title','author','ublication date','Book genres','Plot summary']
df.head()
# We only need two columns
df = df[['title','author']]
df.head()
# import htrc metadata
df2=pd.read_csv('hathi_upd_20190801.txt',sep='\t')
df2.head()
df2.shape[1]
# Similarly, we rename the columns and select columns
# a "readme" file for Hathifiles can be found at https://www.hathitrust.org/hathifiles_description
df2.columns=['htid','access','rights', 'ht_bib_key', 'description','source','source_bib_num','oclc_num',
'isbn','issn','iccn','title','imprint','rights_reason_code','rights_timestamp','us_gov_doc_flag',
'rights_date_used','pub_place','lang','bib_fmt','collection_code', 'content_provider_code',
'responsible_entity_code','digitization_agent_code','access_profile_code','author']
df2.head()
df2 = df2[['title','author']]
df2.head()
###Output
_____no_output_____
###Markdown
Accurate Matching: find the pairs where the two author strings and the two title strings are exactly the same.
###Code
df_output=pd.merge(df, df2, on=['title','author']).drop_duplicates()
df_output.to_csv(r'accurate_match.csv')
print(df_output)
# No match found
###Output
_____no_output_____
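###Markdown
Before moving on to the fuzzy-matching step, here is a minimal, made-up illustration of the `fuzz.ratio` similarity score that the author and title thresholds in the next section (60 and 70) are compared against. The strings below are purely illustrative and not taken from either dataset.
###Code
# fuzz.ratio returns an integer similarity score between 0 and 100
print(fuzz.ratio("mark twain", "twain, mark"))                                    # reordered name
print(fuzz.ratio("adventures of huckleberry finn",
                 "the adventures of huckleberry finn"))                           # near-identical titles
print(fuzz.ratio("mark twain", "jane austen"))                                    # unrelated strings
###Output
_____no_output_____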
###Markdown
Fuzzy Matching: we only use the first 30 characters of a title because titles often carry redundant "volume" information.
###Code
def fuzzyMatching(path):
df=pd.read_csv(path,sep='\t',nrows=1000)  # read from the path argument rather than a hard-coded filename
df.columns = ['id','Freebase ID','title','author','ublication date','Book genres','Plot summary']
df = df[['id','title','author']]
df.name = str(path)
#print(df.head())
authorratio = []
titleratio = []
sourceauthor = []
htrcauthor = []
sourcetitle = []
htrctitle = []
sourceIDlist= []
htrcDocid = []
#count df items in the loop
counter = 0
#count df2 items in the loop
counter2 = 0
# count number of output files
df2=pd.read_csv('hathi_upd_20190801.txt',sep='\t',nrows=100)
df2.columns=['htid','access','rights', 'ht_bib_key', 'description','source','source_bib_num','oclc_num',
'isbn','issn','iccn','title','imprint','rights_reason_code','rights_timestamp','us_gov_doc_flag',
'rights_date_used','pub_place','lang','bib_fmt','collection_code', 'content_provider_code',
'responsible_entity_code','digitization_agent_code','access_profile_code','author']
df2 = df2[['htid','title','author']]
titlelist = df['title'].values
authorlist = df["author"].values
sourceidlist=df['id'].values
for i in range(len(df)):
counter = counter + 1
author = authorlist[i]
sourceid = sourceidlist[i]
title = str(titlelist[i])[0:30]
authorlist2 = df2["author"].values
titlelist2 = df2['title'].values
htrcgidlist = df2['htid'].values
for j in range(len(df2)):
counter2 = counter2 + 1
author2 = authorlist2[j]
title2 = str(titlelist2[j])[0:30]
htrcid2 = htrcgidlist[j]
aRatio = fuzz.ratio(str(author).lower(), str(author2).lower())
tRatio = fuzz.ratio(str(title).lower(), str(title2).lower())
if aRatio > 60 and tRatio > 70:
sourceIDlist.append(sourceid)
htrcDocid.append(htrcid2)
authorratio.append(aRatio)
sourceauthor.append(author)
htrcauthor.append(author2)
titleratio.append(tRatio)
sourcetitle.append(title)
htrctitle.append(title2)
else:
pass
print(len(sourceauthor), len(htrctitle))
#print(counter,counter2)
df_output = pd.DataFrame(list(zip(authorratio, titleratio, sourceauthor, htrcauthor, sourcetitle, htrctitle, sourceIDlist,
htrcDocid)), columns=['authorratio', 'titleratio', 'sourceauthor', 'htrcauthor', 'sourcetitle', 'htrctitle',
'sourceId', 'htrcDocid'])
fileName=df.name
df_output.to_csv('fuzzyMatchingResult.csv', index=False)
print('done')
#this is a simple test, we did not find any pairs.
path='booksummaries.txt'
fuzzyMatching(path)
# this is an example of some successful fuzzy matching cases using metadata datasets of British Periodicals and HTRC
# the first two rows represent the reliability (100%) of the matching
demo=pd.read_csv('example_bpo_htrc.csv')
demo
###Output
_____no_output_____ |
Session11/Day3/GalaxyPhotometryAndShapes.ipynb | ###Markdown
Practice with galaxy photometry and shape measurement. To accompany galaxy-measurement lecture from the LSSTC Data Science Fellowship Program, July 2020. All questions and corrections can be directed to me at [email protected]! _Gary Bernstein, 16 July 2020_
###Code
# Load the packages we will use
import numpy as np
import astropy.io.fits as fits
import astropy.coordinates as co
from matplotlib import pyplot as plt
import scipy.fft as fft
%matplotlib inline
###Output
/Users/rmorgan/anaconda3/envs/DSFP/lib/python3.6/site-packages/matplotlib/style/core.py:167: UserWarning: In /Users/rmorgan/.matplotlib/stylelib/alex.mplstyle:
The text.latex.unicode rcparam was deprecated in Matplotlib 2.2 and will be removed in 3.1.
styles = read_style_directory(stylelib_path)
###Markdown
Useful tools

For our galaxy measurement practice, we'll be testing out some of our techniques on *exponential profile* galaxies, which are defined by
$$ I(x,y) \propto e^{-r/r_0},$$
where $r_0$ is the "scale length," and we'll allow our galaxy to potentially be elliptical shaped by setting
$$ r^2 = (1-e^2) \left[ \frac{(x-x_0)^2}{1-e} + \frac{(y-y_0)^2}{1+e}\right].$$
To reduce the complexity of our problem, I'm only letting the galaxy have the $e_+$ form of ellipticity, where $e>0$ ($e<0$) means the galaxy is stretched along the $x$ ($y$) axis.

We're also going to assume that our galaxy is viewed through a circular Gaussian PSF:
$$ T(x,y) \propto e^{-(x^2+y^2)/2\sigma_{\rm PSF}^2}.$$

The function `drawDisk` below is provided to draw an image of an elliptical exponential galaxy as convolved with a Gaussian PSF. You don't have to understand how it works to do these exercises. But you might be interested (since this is how the `GalSim` galaxy simulation package works): the galaxy and the PSF are first "drawn" in Fourier space, and then multiplied, since a convolution in real space is multiplication in Fourier space (which is *much* faster). Then we use a Fast Fourier Transform (FFT) to get our image back in real space.

I also include in this notebook two helpful things from the astrometry notebook:
* The function `addBackground` which will add background noise of a chosen level (denoted as $n$ in the lecture notes) to any image.
* The `x` and `y` arrays that give the location values of each pixel.

In this set of exercises, we'll work exclusively with 64x64 images. Also I am going to redefine the coordinate system so that $(x,y)=(0,0)$ is actually at element `[32,32]` of the array.
###Code
def addBackground(image, variance):
# Add Gaussian noise with given variance to each pixel of the image
noise = np.random.normal(scale=np.sqrt(variance),size=image.shape)
return image + noise
n_pix = 64
xy=np.indices( (n_pix,n_pix),dtype=float)
x = xy[1].copy()- n_pix/2
y = xy[0].copy()- n_pix/2
plt.imshow(x,origin='lower',interpolation='nearest')
plt.title("This is a plot of x coordinate")
plt.colorbar()
# Here is our elliptical exponential galaxy drawing function
# It is always centered on the pixel just above right of the image center.
def drawDisk(r0=4.,flux=1.,e=0.,sigma_psf=3.,n_pix=n_pix):
# n_pix must be even.
# Build arrays holding the (ky,kx) values
# irfft2 wants array of this shape:
tmp = np.ones((n_pix,n_pix//2+1),dtype=float)
freqs = np.arange(-n_pix//2,n_pix//2)
freqs = (2 * np.pi / n_pix)*np.roll(freqs,n_pix//2)
kx = tmp * freqs[:n_pix//2+1]
ky = tmp * freqs[:,np.newaxis]
# Calculate the FT of the PSF
ft = np.exp( (kx*kx+ky*ky)*(-sigma_psf*sigma_psf/2.))
# Produce the FT of the exponential - for the circular version,
# it's (1+k^2 r_0^2)**(-3/2)
# factors to "ellipticize" and scale the k's:
a = np.power((1+e)/(1-e),0.25)
ksqp1 = np.square(r0*kx*a) + np.square(r0*ky/a) + 1
ft *= flux / (ksqp1*np.sqrt(ksqp1))
# Now FFT back to real space
img = fft.irfft2(ft)
# And roll the origin to the center
return np.roll(img, (n_pix//2,n_pix//2),axis=(0,1))
# As a test, let's draw an image with a small PSF size and
# see if it really is exponential.
# With e>0, it should be extended along x axis
r0=4.
img = drawDisk(e=0.2,flux=1e5,sigma_psf=3.,r0=r0)
plt.imshow(img,origin='lower',interpolation='nearest')
plt.title("Is it stretched along x?")
# And also a plot of log(flux) vs x or y should look linear
plt.figure()
plt.plot(np.arange(-32,32)/r0,np.log(img[:,32]),label='Y')
plt.plot(np.arange(-32,32)/r0,np.log(img[32,:]),label='X')
plt.legend()
plt.title("Are the lines straight and near unity slope?")
plt.xlabel("(x or y)/r0")
plt.ylabel("log(I)")
plt.grid()
###Output
_____no_output_____
###Markdown
Exercise 1: Aperture photometry

Here we'll try out a few forms of aperture photometry and see how they compare in terms of the S/N ratios they provide on the galaxy flux.

**(a)** Write a function `tophat_flux(img,R)` which implements a simple tophat aperture sum of flux in all pixels within radius `R` of the center of the galaxy. We will keep the center of our galaxy fixed at pixel \[32,32\] so you don't have to worry about iterating to find the centroid.

Draw a noiseless version of a circular galaxy with the characteristics in the cell below. Then use your `tophat_flux` function to plot the "curve of growth" for this image, with `R` on the x axis going from 5 to 30 pixels, and the y axis showing the fraction of the total flux that falls in your aperture.

How many scale radii do we need the aperture to be to miss <1% of the flux?
###Code
r0 = 4.
e = 0.
flux = 1e4
sigma_psf = 2.
def tophat_flux(img, R, center_x=32, center_y=32):
yy, xx = np.meshgrid(range(img.shape[0]), range(img.shape[1]))
distances = np.sqrt((center_x - xx)**2 + (center_y - yy)**2)
return np.sum(img[np.where(distances < R)])
circ_gal_ = drawDisk(r0=r0,flux=flux,e=e,sigma_psf=sigma_psf)
R_arr = np.arange(5., 31.)
cog = np.array([tophat_flux(circ_gal_, R) / flux for R in R_arr])
plt.figure()
plt.plot(R_arr, cog)
plt.xlabel("Aperture Radius [pixels]")
plt.ylabel("Fraction of Total Flux in Aperature")
plt.show()
print("You need {:.2f} scale radii".format(R_arr[np.argmin(np.abs(cog - 0.99))] / r0))
###Output
_____no_output_____
###Markdown
**(b)** Next let's add some background noise to our image, say `n_bg=100`.
* First, make one such noisy version of your galaxy and `imshow` it.
* Then, using **analytic** methods, estimate what the variance of your aperture flux measurements will be when `R=10`.
* Finally, make 1000 different realizations of your noisy galaxy and measure their `tophat_flux` to see whether the real variance of the flux measurements matches your prediction.
###Code
circ_gal_noise_ = addBackground(circ_gal_, 100)
plt.imshow(circ_gal_noise_, origin='lower', interpolation='nearest')
###Output
_____no_output_____
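###Markdown
One possible sketch for the remaining parts of (b) (not the only approach): each pixel carries independent background noise of variance `n_bg`, so the analytic variance of the tophat sum over `N_pix` pixels is simply `n_bg * N_pix`. The cell below evaluates that number for `R=10` and checks it against 1000 noisy realizations; the variable names and the choice `n_bg=100` are just the ones used in the text above.
###Code
# Analytic estimate: variance of a sum of N_pix independent pixels, each with variance n_bg
n_bg = 100.
R = 10.
N_pix_in_aperture = np.sum(x**2 + y**2 < R**2)   # same geometry as tophat_flux, centered on [32,32]
var_analytic = n_bg * N_pix_in_aperture
print("pixels in aperture:", N_pix_in_aperture)
print("analytic variance of the tophat flux:", var_analytic)

# Monte Carlo check: 1000 noisy realizations of the same galaxy
fluxes = np.array([tophat_flux(addBackground(circ_gal_, n_bg), R) for _ in range(1000)])
print("empirical variance over 1000 realizations:", fluxes.var())
###Output
_____no_output_____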
###Markdown
**(c)** Now create a plot of the S/N level of the flux measurement vs the radius `R` of the aperture. Here the signal is the mean, and the noise the std deviation, of the `tophat_flux` of many noisy measurements of this galaxy. You can use either an analytic or numeric estimate of these quantities. Report what the optimal tophat S/N is, and what `R` achieves it.
###Code
# your work here...
###Output
_____no_output_____
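###Markdown
One possible sketch for part (c), using the analytic noise estimate from (b): the signal is the tophat flux of the noiseless image and the noise is `sqrt(n_bg * N_pix)`. The radius grid below is an arbitrary illustrative choice.
###Code
n_bg = 100.
R_values = np.arange(3., 31.)
signal = np.array([tophat_flux(circ_gal_, R) for R in R_values])
noise = np.array([np.sqrt(n_bg * np.sum(x**2 + y**2 < R**2)) for R in R_values])
snr = signal / noise
plt.plot(R_values, snr)
plt.xlabel('aperture radius R [pixels]')
plt.ylabel('S/N')
plt.grid()
i_best = np.argmax(snr)
print("optimal tophat S/N ~ {:.1f} at R ~ {:.0f} pixels".format(snr[i_best], R_values[i_best]))
###Output
_____no_output_____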
###Markdown
**(d)** Repeat part (c), but this time use a *Gaussian* aperture whose width $\sigma_w$ you vary to optimize the S/N ratio of the aperture flux, i.e. a function `gaussian_flux(img,sigma_w)` is needed. Which performs better, the optimized tophat or the optimized Gaussian?
###Code
# your work here...
###Output
_____no_output_____
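###Markdown
One possible sketch for part (d). Here `gaussian_flux` weights each pixel by an (unnormalized) circular Gaussian centered on pixel [32,32]; for a weighted sum with per-pixel background variance `n_bg`, the noise variance is `n_bg * sum(w**2)`. The grid of `sigma_w` values is an arbitrary choice.
###Code
def gaussian_flux(img, sigma_w):
    # Gaussian-weighted aperture flux, centered on the galaxy at pixel [32,32]
    w = np.exp(-(x**2 + y**2) / (2. * sigma_w**2))
    return np.sum(w * img)

n_bg = 100.
sigma_w_values = np.arange(1., 15., 0.5)
snr_gauss = []
for s in sigma_w_values:
    w = np.exp(-(x**2 + y**2) / (2. * s**2))
    # signal from the noiseless image, noise from the weighted background sum
    snr_gauss.append(np.sum(w * circ_gal_) / np.sqrt(n_bg * np.sum(w**2)))
snr_gauss = np.array(snr_gauss)
plt.plot(sigma_w_values, snr_gauss)
plt.xlabel('Gaussian aperture width sigma_w [pixels]')
plt.ylabel('S/N')
plt.grid()
i_best = np.argmax(snr_gauss)
print("optimal Gaussian-aperture S/N ~ {:.1f} at sigma_w ~ {:.1f} pixels".format(
    snr_gauss[i_best], sigma_w_values[i_best]))
###Output
_____no_output_____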
###Markdown
Exercise 2: Spurious color

This time let's consider that we want to measure an accurate $g-r$ color for our galaxy, but the seeing is $\sigma_{\rm PSF}=2$ pixels in the $r$ image but $\sigma_{\rm PSF}=2.5$ pixels in the $g$ image. Let's see how the size of our aperture biases our color measurement.

**(a)** Draw a noiseless $g$-band and a noiseless $r$-band image of our galaxy. Let's assume that the true color $g-r \equiv 2.5\log_{10}(f_r/f_g) = 0,$ i.e. that the $g$ and $r$ fluxes of the galaxy are both equal to our nominal `flux`. Plot the difference between the two images: are they the same?
###Code
# your work here...
###Output
_____no_output_____
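###Markdown
One possible sketch for Exercise 2(a): draw the same galaxy (the Exercise 1 parameters, `r0=4`, `flux=1e4`) with the two PSF widths and image the difference. The names `img_r` and `img_g` are just illustrative.
###Code
img_r = drawDisk(r0=4., flux=1e4, e=0., sigma_psf=2.0)   # r-band seeing
img_g = drawDisk(r0=4., flux=1e4, e=0., sigma_psf=2.5)   # g-band seeing
plt.imshow(img_g - img_r, origin='lower', interpolation='nearest')
plt.colorbar()
plt.title('g image minus r image')
# The total flux is identical, but the g image is more blurred, so the
# difference should be negative in the core and positive in the wings.
###Output
_____no_output_____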
###Markdown
**(b)** Using either your Gaussian or your tophat aperture code, plot the *measured* $g-r$ color of the galaxy as a function of the size of the aperture. Since the true color is zero, this measurement is the size of the systematic error that is being made in color because of mismatched *pre-seeing* apertures.
###Code
# your work here...
###Output
_____no_output_____
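###Markdown
One possible sketch for Exercise 2(b), reusing `gaussian_flux` and the `img_g`/`img_r` images from the sketches above; the range of aperture widths is an arbitrary choice, and the color follows the definition $g-r = 2.5\log_{10}(f_r/f_g)$.
###Code
sigma_w_values = np.arange(2., 20., 0.5)
color = [2.5 * np.log10(gaussian_flux(img_r, s) / gaussian_flux(img_g, s))
         for s in sigma_w_values]
plt.plot(sigma_w_values, color)
plt.axhline(0., color='k', ls='--')
plt.xlabel('Gaussian aperture width sigma_w [pixels]')
plt.ylabel('measured g-r color [mag]')
plt.grid()
###Output
_____no_output_____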
###Markdown
We can see here that a naive use of "matched" apertures can cause significant spurious color, even when the aperture has a sigma that is many times that of the galaxy and PSF. But the tophat does better. So without any kind of PSF matching, we have to use algorithms with non-optimal S/N in order to approach true colors.

Exercise 3: Degradation of ellipticity measurements by seeing

It's hard to measure the shape of a galaxy that is not resolved by the PSF. That means that poorly-resolved galaxies are less useful for detecting weak-lensing (WL) shear. Let's see if we can quantify this by using the Fisher matrix to determine the best possible measurement accuracy on the parameter $e$ of our model (we'll make things easy by holding all other parameters of the galaxy model as fixed).

Remember how the Fisher matrix works: for an image signal $I_{xy}$ and noise $\sigma_{xy}$ in each pixel, the Fisher information for a parameter $\theta$ is
$$ F_{\theta\theta} = \sum_{xy} \frac{1}{\sigma^2_{xy}} \left(\frac{\partial I_{xy}}{\partial\theta}\right)^2.$$
Here we're interested in $\theta=e$.

**(a)** Draw two versions of our standard galaxy, with $e = \pm0.01.$ Use these to calculate and plot the quantity we need, $\frac{\partial I_{xy}}{\partial e}.$ Comment on how this picture relates to the fact that we like to measure WL shear using the moment of $x^2-y^2$.
###Code
# your work here...
###Output
_____no_output_____
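###Markdown
One possible sketch for Exercise 3(a): approximate $\frac{\partial I_{xy}}{\partial e}$ with a central finite difference between the $e=+0.01$ and $e=-0.01$ images, keeping the Exercise 1 galaxy and PSF parameters.
###Code
de = 0.01
img_plus = drawDisk(r0=4., flux=1e4, e=+de, sigma_psf=2.0)
img_minus = drawDisk(r0=4., flux=1e4, e=-de, sigma_psf=2.0)
dI_de = (img_plus - img_minus) / (2. * de)   # central finite difference
plt.imshow(dI_de, origin='lower', interpolation='nearest')
plt.colorbar()
plt.title('dI/de')
# The result should be a quadrupole pattern, positive along x and negative along y,
# which is why moments weighted by x^2 - y^2 are natural WL shear estimators.
###Output
_____no_output_____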
###Markdown
**(b)** Use this to calculate the best achievable measurement accuracy on $e$ for our standard image.
###Code
# your work here...
###Output
_____no_output_____
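###Markdown
One possible sketch for Exercise 3(b), reusing `dI_de` from the previous cell and assuming uniform background variance `n_bg=100` in every pixel, as in Exercise 1.
###Code
n_bg = 100.
F_ee = np.sum(dI_de**2) / n_bg        # Fisher information for e
sigma_e = 1. / np.sqrt(F_ee)          # best achievable measurement error
print("best achievable sigma_e for the standard galaxy: {:.4f}".format(sigma_e))
###Output
_____no_output_____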
###Markdown
**(c)** Make a graph showing how the optimal $\sigma_e$ varies as the size $\sigma_{\rm PSF}$ of the Gaussian PSF varies from being $0.2\times r_0$ to being $3\times r_0.$. What's the lesson here?
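###Markdown
One possible sketch for part (c) (the blank work cell below is left for your own version): repeat the finite-difference Fisher estimate over a grid of PSF sizes between $0.2\,r_0$ and $3\,r_0$; the choice of 15 grid points and `n_bg=100` is arbitrary.
###Code
n_bg = 100.
de = 0.01
r0_gal = 4.
psf_sizes = np.linspace(0.2 * r0_gal, 3.0 * r0_gal, 15)
sigma_e_values = []
for sp in psf_sizes:
    # finite-difference dI/de at this PSF size
    dI = (drawDisk(r0=r0_gal, flux=1e4, e=+de, sigma_psf=sp)
          - drawDisk(r0=r0_gal, flux=1e4, e=-de, sigma_psf=sp)) / (2. * de)
    sigma_e_values.append(1. / np.sqrt(np.sum(dI**2) / n_bg))
plt.plot(psf_sizes / r0_gal, sigma_e_values)
plt.xlabel('sigma_PSF / r0')
plt.ylabel('best achievable sigma_e')
plt.grid()
# Lesson: as the PSF grows relative to the galaxy, the attainable precision on
# the ellipticity degrades sharply.
###Output
_____no_output_____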
###Code
# your work here...
###Output
_____no_output_____ |
Gaia/M34/M34.ipynb | ###Markdown
Import data from csv's
###Code
datadir = os.getcwd()
suffix = ['1-20', '21-40', '41-60', '61-80', '81-100', '101-120', '121-135']
#What we gave the ESA archive
datafile_input = []
for i in range(0 , len(suffix)):
temp = '/ids_{0}.csv'.format(suffix[i])
with open(datadir+temp, 'r') as f:
reader = csv.reader(f)
input_1_20 = list(reader)
datafile_input.append(input_1_20)
#What we got from the ESA archive
datafile_output = []
for i in range(0 , len(suffix)):
temp = '/{0}.csv'.format(suffix[i])
with open(datadir+temp, 'r') as f:
reader = csv.reader(f)
output_1_20 = list(reader)
datafile_output.append(output_1_20)
#extract gaia source IDs from the input files
input_ids = []
for j in range(0, len(datafile_input)):
input_idss = []
for i in range(0, len(datafile_input[j])):
input_idss.append(int(datafile_input[j][i][0].split(" ")[2]))
input_ids.append(input_idss)
#extract gaia source IDs from the output files
output_ids = []
for j in range(0, len(datafile_output)):
temp = [int(datafile_output[j][i][0]) for i in range(1,len(datafile_output[j]))]
output_ids.append(temp)
#check that every pair of files (i.e., the corresponding input and output files) contains the same IDs
for i in range(0, len(output_ids)):
print(set(output_ids[i]) == set(input_ids[i])) #we have to use set, because the output is not in the same order as the input
#now extract all data into lists
output_info = datafile_output[0][0]
output_info
rv = np.asarray([output_all[i][12] for i in range(0, len(output_all))])
rv
rv_0 = []
for i in range(0, len(rv)):
if rv[i] == "":
rv_0.append(0)
else:
rv_0.append(float(rv[i]))
plt.plot(np.arange(0,len(rv_0_new)), rv_0_new)
#list that contains all data
output_all = []
for j in range(0, len(datafile_output)):
#print(j)
for i in range(0, len(datafile_output[j])-1):
#print(i)
temp = datafile_output[j][1:][i]
output_all.append(temp)
len(output_all)
###Output
_____no_output_____
###Markdown
Store data in arrays and exclude stars w/ no 5 parameter solutions
###Code
#every star normally has an id, ra&dec and a magnitude.
sid = np.array([int(output_all[i][0]) for i in range(0, len(output_all))])
ra = np.array([float(output_all[i][1]) for i in range(0, len(output_all))])
dec = np.array([float(output_all[i][3]) for i in range(0, len(output_all))])
#we can convert the magnitudes to fluxes
magg = np.array([float(output_all[i][11]) for i in range(0, len(output_all))])
fluxg = 10**(-0.4*np.array(magg))
max(magg)
#using ra&dec and the flux we can recreate our observation
plt.subplots(1,1,figsize=(16,14))
plt.scatter(ra, dec, s=fluxg*5e5)
plt.gca().invert_xaxis()
plt.xlabel('RA (ยฐ)')
plt.ylabel('DEC (ยฐ)')
plt.show()
#a histogram of the magnitudes
fig, ax1 = plt.subplots(1, 1, figsize=(8,8))
ax1.hist(magg, bins=np.arange(7,18,0.5), edgecolor='black', linewidth=0.5)
ax1.set_xticks(np.arange(7,18,1))
ax1.set_xlabel('Gaia magnitude')
ax1.set_ylabel('frequency')
plt.show()
#because one (or more) elements in the following lists are not numbers, we can't convert them into floats yet...
pax = np.asarray([output_all[i][5] for i in range(0, len(output_all))])
pmra = np.asarray([output_all[i][7] for i in range(0, len(output_all))])
pmdec = np.asarray([output_all[i][9] for i in range(0, len(output_all))])
#Look for missing values
for j in range(0, len(output_all[0])):
for i in range(0, len(output_all)):
if output_all[i][j] == '':
print(output_info[j],i)
#Where is/are the star/s with only a 2 parameter solution?
two_para_star = []
for i in range(0, len(pax)):
if pax[i] == '':
print(i)
two_para_star.append(i)
if pmra[i] == '':
print(i)
two_para_star.append(i)
if pmdec[i] == '':
print(i)
two_para_star.append(i)
list(set(two_para_star))
# star 133 resp. element 132 has no pax, pmra & pmdec!
# so the star will be removed from all lists
sid[132]
#remove element 132:
sid_new = np.delete(sid, two_para_star)
ra_new = np.delete(ra, two_para_star)
dec_new = np.delete(dec, two_para_star)
magg_new = np.delete(magg, two_para_star)
fluxg_new = np.delete(fluxg, two_para_star)
rv_0_new = np.delete(rv_0, two_para_star)
pax_new = np.delete(pax, two_para_star).astype(float)
pmra_new = np.delete(pmra, two_para_star).astype(float)
pmdec_new = np.delete(pmdec, two_para_star).astype(float)
#plot rv values
#positive --> receding (visa versa)
plt.scatter(np.arange(0,len(rv_0_new)), rv_0_new)
plt.show()
#so most stars - with rv values - are moving towards us
#using ra&dec and the flux we can recreate our observation
plt.subplots(1,1,figsize=(8,8))
plt.scatter(ra_new, dec_new, s=fluxg*5e5)
plt.scatter(ra[132], dec[132], s=fluxg[132]*5e5, c='r')
plt.gca().invert_xaxis()
plt.xlabel('RA (ยฐ)')
plt.ylabel('DEC (ยฐ)')
plt.show()
###Output
_____no_output_____
###Markdown
Reconstruct our Observation
###Code
def arrows(x, y, pm_x, pm_y, scale):
temp = []
for i in range(0, len(x)):
temp2 = [x[i], y[i], scale * pm_x[i], scale * pm_y[i]]
temp.append(temp2)
return np.array(temp)
soa = arrows(ra_new, dec_new, pmra_new*np.cos(dec_new), pmdec_new, 0.005)
X, Y, U, V = zip(*soa)
plt.subplots(1,1,figsize=(10,10))
ax = plt.gca()
ax.quiver(X, Y, U, V, angles='xy', scale_units='xy', scale=1, width=0.0017, alpha=1, color='r')
ax.scatter(ra[132], dec[132], s=np.array(fluxg[132])*3e5, c='k')
ax.scatter(ra_new, dec_new, s=np.array(fluxg_new)*3e5)
ax.invert_xaxis()
ax.margins(0.15)
ax.set_xlabel('RA (ยฐ)')
ax.set_ylabel('DEC (ยฐ)')
#plt.savefig('M34_pm.png', dpi=1000)
plt.draw()
plt.show()
#0-->min and 1-->max
def get_index_max(array, min_or_max):
if min_or_max == 0:
tmp = min(array)
tmpi = list(array).index(tmp)
name = "Gaia DR2 %i" % sid_new[tmpi]
return tmp, name
elif min_or_max == 1:
tmp = max(array)
tmpi = list(array).index(tmp)
name = "Gaia DR2 %i" % sid_new[tmpi]
return tmp, name
else:
print('Read the instructions.... dummy')
get_index_max(pax_new, 1)
# convert parallaxes into parsecs
parcs = 1000./np.array(pax_new)
pmra_new_c = pmra_new * np.cos(dec_new)
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(5, 1, figsize=(10,14))
ax1.hist(parcs, bins='auto')
ax2.hist(parcs, bins=np.arange(0,1000,20))
ax3.hist(parcs, bins=np.arange(300,700,16.5))
ax4.hist(pmra_new_c, bins='auto')
ax5.hist(pmdec_new, bins='auto')
#ax1.set_title('distance')
#ax2.set_title('distance zoom')
#ax3.set_title('pm ra')
#ax4.set_title('pm dec')
ax1.set_xlabel('distance (parsec)')
ax2.set_xlabel('distance (parsec)')
ax3.set_xlabel('distance (parsec)')
ax4.set_xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
ax5.set_xlabel('$\mu_\\delta$ (mas/yr)')
ax1.set_ylabel('frequency')
ax2.set_ylabel('frequency')
ax3.set_ylabel('frequency')
ax4.set_ylabel('frequency')
ax5.set_ylabel('frequency')
posx = 0.97
posy = 0.83
ax1.text(posx, posy, 'a', transform=ax1.transAxes, fontsize=16, fontweight='bold')
ax2.text(posx, posy, 'b', transform=ax2.transAxes, fontsize=16, fontweight='bold')
ax3.text(posx, posy, 'c', transform=ax3.transAxes, fontsize=16, fontweight='bold')
ax4.text(posx, posy, 'd', transform=ax4.transAxes, fontsize=16, fontweight='bold')
ax5.text(posx, posy, 'e', transform=ax5.transAxes, fontsize=16, fontweight='bold')
plt.subplots_adjust(hspace=0.5)
fig.savefig('M34_histogram.png', dpi=1000)
plt.show()
###Output
_____no_output_____
###Markdown
Extract Cluster Members
###Code
mask_dist = []
mask_pmra = []
mask_pmdec = []
for i in range(len(parcs)):
mask_dist.append(300 <= parcs[i] <= 700)
for j in range(len(pmra_new_c)):
mask_pmra.append(-1 <= pmra_new_c[j] <= 1.3)
for k in range(len(pmdec_new)):
mask_pmdec.append(-9 <= pmdec_new[k] <= -4)
mask_dist = np.array(mask_dist)
mask_pmra = np.array(mask_pmra)
mask_pmdec = np.array(mask_pmdec)
mask_cluster = []
for ind in range(max(len(mask_dist),len(mask_pmra),len(mask_pmdec))):
if mask_dist[ind] and mask_pmra[ind] and mask_pmdec[ind]:
mask_cluster.append(True)
else:
mask_cluster.append(False)
mask_cluster = np.array(mask_cluster)
mask_cluster
ra_cl = ra_new[mask_cluster]
dec_cl = dec_new[mask_cluster]
pmra_new_c_cl = pmra_new_c[mask_cluster]
pmdec_new_cl = pmdec_new[mask_cluster]
parcs_cl = parcs[mask_cluster]
fluxg_cl = fluxg_new[mask_cluster]
mask_cluster_not = ~(mask_cluster)
soa = arrows(ra_cl, dec_cl, pmra_new_c_cl, pmdec_new_cl, 0.005)
X, Y, U, V = zip(*soa)
plt.subplots(1,1,figsize=(8,8))
ax = plt.gca()
ax.quiver(X, Y, U, V, angles='xy', scale_units='xy', scale=1, width=0.002, alpha=1, color='r')
ax.scatter(ra_new, dec_new, s=np.array(fluxg_new)*5e5)
ax.scatter(ra_cl, dec_cl, s=np.array(fluxg_cl)*5e5,c='k')
ax.invert_xaxis()
ax.margins(0.1)
ax.set_xlabel('RA (ยฐ)')
ax.set_ylabel('DEC (ยฐ)')
#plt.savefig('M34_pm_mask.png', dpi=1000)
plt.draw()
plt.show()
arrow_members = arrows(ra_new[mask_cluster], dec_new[mask_cluster], pmra_new_c[mask_cluster], pmdec_new[mask_cluster], 0.005)
arrow_nomembers = arrows(ra_new[mask_cluster_not], dec_new[mask_cluster_not], pmra_new_c[mask_cluster_not], pmdec_new[mask_cluster_not], 0.005)
X, Y, U, V = zip(*arrow_members)
Xno, Yno, Uno, Vno = zip(*arrow_nomembers)
d10 = list(map(math.log10, parcs[mask_cluster]))
d10no = list(map(math.log10, parcs[mask_cluster_not]))
from mpl_toolkits.mplot3d import Axes3D
import random
fig = plt.figure(figsize=(16,16))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(ra_new[mask_cluster_not], d10no , dec_new[mask_cluster_not], s = np.array(fluxg_new[mask_cluster_not])*5e5)
ax.scatter(ra_new[mask_cluster], d10, dec_new[mask_cluster], s = np.array(fluxg_new[mask_cluster])*5e5, c='k')
ax.set_xlabel('RA (ยฐ)', labelpad=15, fontsize=14)
ax.set_ylabel('log$_{10}$(distance (parsec))', labelpad=15, fontsize=14)
ax.set_zlabel('DEC (ยฐ)', labelpad=17, fontsize=14)
ax.xaxis.set_tick_params(labelsize=13)
ax.yaxis.set_tick_params(labelsize=13)
ax.zaxis.set_tick_params(labelsize=13)
ax.quiver(Xno, d10no, Yno, Uno, 0, Vno, alpha=0.6, color='skyblue', arrow_length_ratio = 0.01)
ax.quiver(X, d10, Y, U, 0, V, alpha=0.8, color='darkblue', arrow_length_ratio = 0.01)
ax.quiver(Xno, d10no, Yno, 0, rv_0_new[mask_cluster_not]*0.01, 0, alpha=0.6, color='y', arrow_length_ratio = 0.01)
ax.quiver(X, d10, Y, 0, rv_0_new[mask_cluster]*0.01, 0, alpha=0.8, color='red', arrow_length_ratio = 0.01)
#ax.tick_params(axis='x', which='major', pad=10)
#ax.tick_params(axis='y', which='major', pad=10)
ax.tick_params(axis='z', which='major', pad=11)
ax.view_init(30, -60)
ax.invert_xaxis()
plt.show()
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(8,10))
hist,bins, __ = ax1.hist(parcs_cl, bins=np.arange(300, 700, 16.6))
ax2.hist(pmra_new_c_cl, bins=np.arange(-1, 1.3, 0.173))
ax3.hist(pmdec_new_cl, bins=np.arange(-9, -4, 0.36))
ax1.set_xlabel('distance (parsec)')
ax2.set_xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
ax3.set_xlabel('$\mu_\\delta$ (mas/yr)')
plt.subplots_adjust(hspace=0.3)
plt.show()
values, bins, _ = plt.hist(parcs_cl, bins='auto')#np.arange(400, 600, 16.6)
mu1, std1 = norm.fit(parcs_cl)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
area = sum(np.diff(bins)*values)
p = norm.pdf(x, mu1, std1)*area
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: $\mu$ = %.1f, $\sigma$ = %.1f" % (mu1, std1)
plt.title(title)
plt.xlabel('distance (parsec)')
plt.ylabel('frequency')
#plt.savefig('M34_Gaussian_pc.png', dpi=1000)
plt.show()
values, bins, _ = plt.hist(pmra_new_c_cl, bins=np.arange(-0.8,1,0.173))
mu2, std2 = norm.fit(pmra_new_c_cl)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
area = sum(np.diff(bins)*values)
p = norm.pdf(x, mu2, std2)*area
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: $\mu$ = %.2f, $\sigma$ = %.2f" % (mu2, std2)
plt.title(title)
plt.xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
plt.ylabel('frequency')
#plt.savefig('M34_Gaussian_pmra.png', dpi=1000)
plt.show()
values, bins, _ = plt.hist(pmdec_new_cl, bins=np.arange(-8,-3,0.36))
mu3, std3 = norm.fit(pmdec_new_cl)
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
area = sum(np.diff(bins)*values)
p = norm.pdf(x, mu3, std3)*area
plt.plot(x, p, 'k', linewidth=2)
title = "Fit results: $\mu$ = %.1f, $\sigma$ = %.1f" % (mu3, std3)
plt.title(title)
plt.xlabel('$\mu_\\delta$ (mas/yr)')
plt.ylabel('frequency')
#plt.savefig('M34_Gaussian_pmdec.png', dpi=1000)
plt.show()
###Output
_____no_output_____
###Markdown
Error Analysis
###Code
err_ra = np.asarray([output_all[i][2] for i in range(0, len(output_all))])
err_dec = np.asarray([output_all[i][4] for i in range(0, len(output_all))])
err_pax = np.asarray([output_all[i][6] for i in range(0, len(output_all))])
err_pmra = np.asarray([output_all[i][8] for i in range(0, len(output_all))])
err_pmdec = np.asarray([output_all[i][10] for i in range(0, len(output_all))])
err_ra_new = np.delete(err_ra, 132).astype(float)
err_dec_new = np.delete(err_dec, 132).astype(float)
err_pax_new = np.delete(err_pax, 132).astype(float)
err_pmra_new = np.delete(err_pmra, 132).astype(float)
err_pmdec_new = np.delete(err_pmdec, 132).astype(float)
fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(5, 1, figsize=(10,14))
_,bins,__ = ax1.hist(err_ra_new, bins='auto')
ax1.hist(err_ra_new[mask_cluster], bins)
_,bins,__ = ax2.hist(err_dec_new, bins='auto')
ax2.hist(err_dec_new[mask_cluster], bins)
_,bins,__ = ax3.hist(err_pax_new, bins='auto')
ax3.hist(err_pax_new[mask_cluster], bins)
_,bins,__ = ax4.hist(err_pmra_new, bins='auto')
ax4.hist(err_pmra_new[mask_cluster], bins)
_,bins,__ = ax5.hist(err_pmdec_new, bins='auto')
ax5.hist(err_pmdec_new[mask_cluster], bins)
ax1.set_xlabel('distance (parsec)')
ax2.set_xlabel('distance (parsec)')
ax3.set_xlabel('distance (parsec)')
ax4.set_xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
ax5.set_xlabel('$\mu_\\delta$ (mas/yr)')
ax1.set_ylabel('frequency')
ax2.set_ylabel('frequency')
ax3.set_ylabel('frequency')
ax4.set_ylabel('frequency')
ax5.set_ylabel('frequency')
posx = 0.97
posy = 0.83
ax1.text(posx, posy, 'a', transform=ax1.transAxes, fontsize=16, fontweight='bold')
ax2.text(posx, posy, 'b', transform=ax2.transAxes, fontsize=16, fontweight='bold')
ax3.text(posx, posy, 'c', transform=ax3.transAxes, fontsize=16, fontweight='bold')
ax4.text(posx, posy, 'd', transform=ax4.transAxes, fontsize=16, fontweight='bold')
ax5.text(posx, posy, 'e', transform=ax5.transAxes, fontsize=16, fontweight='bold')
plt.subplots_adjust(hspace=0.5)
#fig.savefig('M34_histogram.png', dpi=1000)
plt.show()
###Output
_____no_output_____
###Markdown
Gaia Magnitude
###Code
fig, ax1 = plt.subplots(1, 1, figsize=(8,8))
ax1.hist(magg_new, bins=np.arange(7,17.5,0.5), edgecolor='black', linewidth=0.5)
ax1.hist(magg_new[mask_cluster], bins=np.arange(7,17.5,0.5), edgecolor='black', linewidth=0.5, alpha=1)
ax1.set_xticks(np.arange(7,18,1))
ax1.set_yticks(np.arange(0,22,3))
ax1.set_xlabel('Gaia magnitude')
ax1.set_ylabel('frequency')
plt.show()
print( "#INPUT: %.1i" % (len(sid)))
print( "WITH ALL 5 PARAMETERS: %.1i" % (len(sid_new)))
print()
print( "--> NO 5 parameter sols for: %.1i" % (len(sid)-len(sid_new)))
print()
print( "RV exist for: %.1i" % (rv_0.count(0)))
print( "NO RV exist for: %.1i" % (len(sid)-rv_0.count(0)))
print()
print( "--> Fraction: %.3f" % (rv_0.count(0)/len(sid)))
print()
print()
print( "Distance: %.1f +/- %.1f" % (mu1, std1))
print( "PM RA: %.1f +/- %.1f" % (mu2, std2))
print( "PM DEC: %.1f +/- %.1f" % (mu3, std3))
print()
print()
plt.scatter(pmra_new_c, pmdec_new,s=5)
plt.xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
plt.ylabel('$\mu_\\delta$ (mas/yr)')
plt.show()
def x_both(lst):
tmp = lst + [-x for x in lst]
return tmp
#1 SIGMA
def x_ellipse1(a, b):
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(xel_both)
def y_ellipse1(a, b):
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(yel_both)
#2 SIGMA
def x_ellipse2(a, b):
a = 2*a
b = 2*b
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(xel_both)
def y_ellipse2(a, b):
a = 2*a
b = 2*b
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(yel_both)
#3 SIGMA
def x_ellipse3(a, b):
a = 3*a
b = 3*b
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(xel_both)
def y_ellipse3(a, b):
a = 3*a
b = 3*b
xel = np.arange(-a, a, 0.0001)
xel_pow = xel**2
dis = a**2-xel_pow
yel = b/a * np.sqrt(dis.tolist())
yel_both = []
for i in yel:
yel_both.append(i)
for i in yel:
yel_both.append(-i)
xel_both = x_both(xel.tolist())
return np.array(yel_both)
plt.subplots(1,1,figsize=(12,12))
plt.scatter(pmra_new_c, pmdec_new,s=5)
x_el1 = x_ellipse1(std2,std3)+mu2
y_el1 = y_ellipse1(std2,std3)+mu3
x_el2 = x_ellipse2(std2,std3)+mu2
y_el2 = y_ellipse2(std2,std3)+mu3
x_el3 = x_ellipse3(std2,std3)+mu2
y_el3 = y_ellipse3(std2,std3)+mu3
plt.plot(x_el1, y_el1, c='r', linewidth=1)
plt.plot(x_el2, y_el2, c='r', linewidth=1)
plt.plot(x_el3, y_el3, c='r', linewidth=1)
plt.xlabel('$\mu_\\alpha$ cos $\delta$ (mas/yr)')
plt.ylabel('$\mu_\\delta$ (mas/yr)')
plt.xlim(-10,10)
plt.ylim(-20,10)
#plt.xscale("symlog")
#plt.yscale("symlog")
plt.show()
###Output
_____no_output_____ |
analyses/seasonality_paper_nn/all/model_analysis_loco.ipynb | ###Markdown
Setup
###Code
from specific import *
###Output
_____no_output_____
###Markdown
Retrieve previous results from the 'model' notebook
###Code
X_train, X_test, y_train, y_test = data_split_cache.load()
rf = get_model()
###Output
_____no_output_____
###Markdown
Get Dask Client
###Code
def print_path():
import sys
return sys.path
client.submit(print_path).result()
# client.scheduler_info()['workers']['tcp://10.149.10.103:40991']
def p2():
import wildfires
return str(wildfires)
for worker in client.scheduler_info()["workers"]:
# print(worker)
# print(client.submit(print_path, pure=False, workers={worker}).result())
try:
print(client.submit(p2, pure=False, workers={worker}).result())
except:
print("Failed:", worker)
client.restart()
# client = Client(n_workers=1, threads_per_worker=8, resources={'threads': 8})
client = get_client()
client
###Output
_____no_output_____
###Markdown
Dask LOCO
###Code
loco_cache = SimpleCache("loco_results", cache_dir=CACHE_DIR)
leave_out = [""]
leave_out.extend(X_train.columns)
# XXX:
# loco_cache.clear()
@loco_cache
def get_loco_scores():
return dict(
dask_fit_loco(
rf, X_train, y_train, client, leave_out, local_n_jobs=31, verbose=True
)
)
scores = get_loco_scores()
scores
###Output
_____no_output_____ |
12_CNN_in_TF_Part2/12_CNN_in_TF.ipynb | ###Markdown
**Convolutional Neural Network (CNN) in TensorFlow** Import TensorFlow
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, models, activations, Model, Input, regularizers
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Download and prepare the CIFAR10 dataset

The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them.
###Code
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
###Output
_____no_output_____
###Markdown
Verify the data

To verify that the dataset looks correct, let's plot the first 25 images from the training set and display the class name below each image.
###Code
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i], cmap=plt.cm.binary)
# The CIFAR labels happen to be arrays,
# which is why you need the extra index
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
###Output
_____no_output_____
###Markdown
**Create the convolutional base**

The code below defines the convolutional base using a common pattern: a stack of [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) and [MaxPooling2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) layers.

As input, a CNN takes tensors of shape (image_height, image_width, color_channels), ignoring the batch size. If you are new to these dimensions, color_channels refers to (R,G,B). In this example, you will configure our CNN to process inputs of shape (32, 32, 3), which is the format of CIFAR images. You can do this by passing the argument `input_shape` to our first layer.

**CNN Layer**

**Max Pooling Layer**

**Batch Normalization**

1. Speeds up training
2. Decreases importance of initial weights
3. Regularizes the model (a little bit)

**Regularization**

Regularization is a technique which makes slight modifications to the learning algorithm such that the model generalizes better. This in turn improves the model's performance on the unseen data as well.
###Code
def my_model():
inputs = Input(shape=(32, 32, 3))
x = layers.Conv2D(32, 3, padding="same", kernel_regularizer=regularizers.l2(0.01),)(
inputs
)
x = layers.BatchNormalization()(x)
x = activations.relu(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, padding="same", kernel_regularizer=regularizers.l2(0.01),)(
x
)
x = layers.BatchNormalization()(x)
x = activations.relu(x)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(
128, 3, padding="same", kernel_regularizer=regularizers.l2(0.01),
)(x)
x = layers.BatchNormalization()(x)
x = activations.relu(x)
x = layers.Flatten()(x)
x = layers.Dense(64, activation="relu", kernel_regularizer=regularizers.l2(0.01),)(
x
)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)
model = Model(inputs=inputs, outputs=outputs)
return model
model = my_model()
###Output
_____no_output_____
###Markdown
Here's the complete architecture of our model.
###Code
model.summary()
###Output
Model: "model_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_7 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
conv2d_33 (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
batch_normalization_18 (Batc (None, 32, 32, 32) 128
_________________________________________________________________
tf.nn.relu_18 (TFOpLambda) (None, 32, 32, 32) 0
_________________________________________________________________
max_pooling2d_22 (MaxPooling (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_34 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
batch_normalization_19 (Batc (None, 16, 16, 64) 256
_________________________________________________________________
tf.nn.relu_19 (TFOpLambda) (None, 16, 16, 64) 0
_________________________________________________________________
max_pooling2d_23 (MaxPooling (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_35 (Conv2D) (None, 8, 8, 128) 73856
_________________________________________________________________
batch_normalization_20 (Batc (None, 8, 8, 128) 512
_________________________________________________________________
tf.nn.relu_20 (TFOpLambda) (None, 8, 8, 128) 0
_________________________________________________________________
flatten_6 (Flatten) (None, 8192) 0
_________________________________________________________________
dense_12 (Dense) (None, 64) 524352
_________________________________________________________________
dropout_3 (Dropout) (None, 64) 0
_________________________________________________________________
dense_13 (Dense) (None, 10) 650
=================================================================
Total params: 619,146
Trainable params: 618,698
Non-trainable params: 448
_________________________________________________________________
###Markdown
As you can see, the (8, 8, 128) output of the final convolutional block was flattened into vectors of length 8192 (see `flatten_6` in the summary above) before going through the two Dense layers. Compile and train the model
###Code
model.compile(keras.optimizers.Adam(learning_rate=3e-4),  # 'lr' is a deprecated alias of learning_rate
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64,verbose=2,epochs=150,
validation_data=(test_images, test_labels))
###Output
Epoch 1/150
782/782 - 4s - loss: 3.0447 - accuracy: 0.3548 - val_loss: 2.0338 - val_accuracy: 0.5449
Epoch 2/150
782/782 - 3s - loss: 1.9154 - accuracy: 0.4687 - val_loss: 1.5710 - val_accuracy: 0.6087
Epoch 3/150
782/782 - 3s - loss: 1.6150 - accuracy: 0.5213 - val_loss: 1.3750 - val_accuracy: 0.6316
Epoch 4/150
782/782 - 3s - loss: 1.4983 - accuracy: 0.5463 - val_loss: 1.3236 - val_accuracy: 0.6238
Epoch 5/150
782/782 - 3s - loss: 1.4293 - accuracy: 0.5669 - val_loss: 1.3829 - val_accuracy: 0.5921
Epoch 6/150
782/782 - 3s - loss: 1.3960 - accuracy: 0.5742 - val_loss: 1.4841 - val_accuracy: 0.5424
Epoch 7/150
782/782 - 3s - loss: 1.3539 - accuracy: 0.5915 - val_loss: 1.1862 - val_accuracy: 0.6781
Epoch 8/150
782/782 - 3s - loss: 1.3309 - accuracy: 0.5983 - val_loss: 1.1942 - val_accuracy: 0.6576
Epoch 9/150
782/782 - 3s - loss: 1.3104 - accuracy: 0.6078 - val_loss: 1.1453 - val_accuracy: 0.6806
Epoch 10/150
782/782 - 3s - loss: 1.2908 - accuracy: 0.6148 - val_loss: 1.2737 - val_accuracy: 0.6428
Epoch 11/150
782/782 - 3s - loss: 1.2702 - accuracy: 0.6237 - val_loss: 1.6426 - val_accuracy: 0.5309
Epoch 12/150
782/782 - 3s - loss: 1.2581 - accuracy: 0.6298 - val_loss: 1.0651 - val_accuracy: 0.7221
Epoch 13/150
782/782 - 3s - loss: 1.2463 - accuracy: 0.6383 - val_loss: 1.2789 - val_accuracy: 0.6369
Epoch 14/150
782/782 - 3s - loss: 1.2333 - accuracy: 0.6416 - val_loss: 1.3313 - val_accuracy: 0.6235
Epoch 15/150
782/782 - 3s - loss: 1.2173 - accuracy: 0.6466 - val_loss: 1.2409 - val_accuracy: 0.6445
Epoch 16/150
782/782 - 3s - loss: 1.2098 - accuracy: 0.6516 - val_loss: 1.1779 - val_accuracy: 0.6757
Epoch 17/150
782/782 - 3s - loss: 1.1913 - accuracy: 0.6594 - val_loss: 1.1438 - val_accuracy: 0.6924
Epoch 18/150
782/782 - 3s - loss: 1.1869 - accuracy: 0.6620 - val_loss: 0.9685 - val_accuracy: 0.7549
Epoch 19/150
782/782 - 3s - loss: 1.1776 - accuracy: 0.6674 - val_loss: 1.1231 - val_accuracy: 0.6889
Epoch 20/150
782/782 - 3s - loss: 1.1681 - accuracy: 0.6702 - val_loss: 1.1981 - val_accuracy: 0.6753
Epoch 21/150
782/782 - 3s - loss: 1.1561 - accuracy: 0.6753 - val_loss: 1.0402 - val_accuracy: 0.7273
Epoch 22/150
782/782 - 3s - loss: 1.1519 - accuracy: 0.6800 - val_loss: 1.1846 - val_accuracy: 0.6670
Epoch 23/150
782/782 - 3s - loss: 1.1410 - accuracy: 0.6827 - val_loss: 1.1373 - val_accuracy: 0.6978
Epoch 24/150
782/782 - 3s - loss: 1.1319 - accuracy: 0.6875 - val_loss: 1.0876 - val_accuracy: 0.7146
Epoch 25/150
782/782 - 3s - loss: 1.1234 - accuracy: 0.6905 - val_loss: 1.1434 - val_accuracy: 0.7050
Epoch 26/150
782/782 - 3s - loss: 1.1195 - accuracy: 0.6914 - val_loss: 1.4046 - val_accuracy: 0.6139
Epoch 27/150
782/782 - 3s - loss: 1.1136 - accuracy: 0.6965 - val_loss: 1.0912 - val_accuracy: 0.7197
Epoch 28/150
782/782 - 3s - loss: 1.1048 - accuracy: 0.7000 - val_loss: 1.1225 - val_accuracy: 0.7022
Epoch 29/150
782/782 - 3s - loss: 1.0957 - accuracy: 0.7045 - val_loss: 1.1090 - val_accuracy: 0.7200
Epoch 30/150
782/782 - 3s - loss: 1.0878 - accuracy: 0.7070 - val_loss: 1.3491 - val_accuracy: 0.6164
Epoch 31/150
782/782 - 3s - loss: 1.0907 - accuracy: 0.7085 - val_loss: 1.0553 - val_accuracy: 0.7261
Epoch 32/150
782/782 - 3s - loss: 1.0869 - accuracy: 0.7115 - val_loss: 1.1252 - val_accuracy: 0.7097
Epoch 33/150
782/782 - 3s - loss: 1.0718 - accuracy: 0.7127 - val_loss: 1.3919 - val_accuracy: 0.6360
Epoch 34/150
782/782 - 3s - loss: 1.0645 - accuracy: 0.7168 - val_loss: 1.1687 - val_accuracy: 0.6940
Epoch 35/150
782/782 - 3s - loss: 1.0648 - accuracy: 0.7184 - val_loss: 1.1681 - val_accuracy: 0.6907
Epoch 36/150
782/782 - 3s - loss: 1.0697 - accuracy: 0.7180 - val_loss: 0.9257 - val_accuracy: 0.7810
Epoch 37/150
782/782 - 3s - loss: 1.0549 - accuracy: 0.7237 - val_loss: 1.1641 - val_accuracy: 0.6907
Epoch 38/150
782/782 - 3s - loss: 1.0519 - accuracy: 0.7237 - val_loss: 1.0386 - val_accuracy: 0.7423
Epoch 39/150
782/782 - 3s - loss: 1.0506 - accuracy: 0.7240 - val_loss: 1.3564 - val_accuracy: 0.6433
Epoch 40/150
782/782 - 3s - loss: 1.0460 - accuracy: 0.7284 - val_loss: 0.9866 - val_accuracy: 0.7589
Epoch 41/150
782/782 - 3s - loss: 1.0428 - accuracy: 0.7280 - val_loss: 1.0603 - val_accuracy: 0.7409
Epoch 42/150
782/782 - 3s - loss: 1.0286 - accuracy: 0.7324 - val_loss: 1.1154 - val_accuracy: 0.7310
Epoch 43/150
782/782 - 3s - loss: 1.0323 - accuracy: 0.7350 - val_loss: 1.0924 - val_accuracy: 0.7233
Epoch 44/150
782/782 - 3s - loss: 1.0329 - accuracy: 0.7335 - val_loss: 1.2308 - val_accuracy: 0.6813
Epoch 45/150
782/782 - 3s - loss: 1.0269 - accuracy: 0.7359 - val_loss: 1.0198 - val_accuracy: 0.7540
Epoch 46/150
782/782 - 3s - loss: 1.0212 - accuracy: 0.7387 - val_loss: 1.2861 - val_accuracy: 0.6742
Epoch 47/150
782/782 - 3s - loss: 1.0239 - accuracy: 0.7382 - val_loss: 1.0078 - val_accuracy: 0.7600
Epoch 48/150
782/782 - 3s - loss: 1.0073 - accuracy: 0.7441 - val_loss: 1.0322 - val_accuracy: 0.7537
Epoch 49/150
782/782 - 3s - loss: 1.0111 - accuracy: 0.7448 - val_loss: 1.0264 - val_accuracy: 0.7496
Epoch 50/150
782/782 - 3s - loss: 1.0103 - accuracy: 0.7407 - val_loss: 1.1848 - val_accuracy: 0.6947
Epoch 51/150
782/782 - 3s - loss: 0.9929 - accuracy: 0.7492 - val_loss: 0.9890 - val_accuracy: 0.7655
Epoch 52/150
782/782 - 3s - loss: 1.0078 - accuracy: 0.7453 - val_loss: 0.9409 - val_accuracy: 0.7818
Epoch 53/150
782/782 - 3s - loss: 1.0000 - accuracy: 0.7461 - val_loss: 1.2011 - val_accuracy: 0.7082
Epoch 54/150
782/782 - 3s - loss: 0.9954 - accuracy: 0.7479 - val_loss: 1.0508 - val_accuracy: 0.7505
Epoch 55/150
782/782 - 3s - loss: 0.9992 - accuracy: 0.7487 - val_loss: 1.0921 - val_accuracy: 0.7343
Epoch 56/150
782/782 - 3s - loss: 0.9908 - accuracy: 0.7524 - val_loss: 1.0021 - val_accuracy: 0.7583
Epoch 57/150
782/782 - 3s - loss: 0.9896 - accuracy: 0.7515 - val_loss: 1.1293 - val_accuracy: 0.7240
Epoch 58/150
782/782 - 3s - loss: 0.9893 - accuracy: 0.7513 - val_loss: 1.1180 - val_accuracy: 0.7241
Epoch 59/150
782/782 - 3s - loss: 0.9835 - accuracy: 0.7564 - val_loss: 0.9288 - val_accuracy: 0.7810
Epoch 60/150
782/782 - 3s - loss: 0.9836 - accuracy: 0.7546 - val_loss: 1.0944 - val_accuracy: 0.7411
Epoch 61/150
782/782 - 3s - loss: 0.9777 - accuracy: 0.7565 - val_loss: 0.9803 - val_accuracy: 0.7639
Epoch 62/150
782/782 - 3s - loss: 0.9828 - accuracy: 0.7559 - val_loss: 1.0648 - val_accuracy: 0.7337
Epoch 63/150
782/782 - 3s - loss: 0.9778 - accuracy: 0.7584 - val_loss: 1.2527 - val_accuracy: 0.6731
Epoch 64/150
782/782 - 3s - loss: 0.9653 - accuracy: 0.7619 - val_loss: 1.0337 - val_accuracy: 0.7578
Epoch 65/150
782/782 - 3s - loss: 0.9696 - accuracy: 0.7599 - val_loss: 1.2728 - val_accuracy: 0.6917
Epoch 66/150
782/782 - 3s - loss: 0.9678 - accuracy: 0.7633 - val_loss: 1.1066 - val_accuracy: 0.7244
Epoch 67/150
782/782 - 3s - loss: 0.9702 - accuracy: 0.7604 - val_loss: 1.0591 - val_accuracy: 0.7412
Epoch 68/150
782/782 - 3s - loss: 0.9598 - accuracy: 0.7650 - val_loss: 1.0395 - val_accuracy: 0.7612
Epoch 69/150
782/782 - 3s - loss: 0.9705 - accuracy: 0.7598 - val_loss: 0.9432 - val_accuracy: 0.7842
Epoch 70/150
782/782 - 3s - loss: 0.9634 - accuracy: 0.7624 - val_loss: 1.0362 - val_accuracy: 0.7591
Epoch 71/150
782/782 - 3s - loss: 0.9592 - accuracy: 0.7645 - val_loss: 1.0383 - val_accuracy: 0.7479
Epoch 72/150
782/782 - 3s - loss: 0.9598 - accuracy: 0.7653 - val_loss: 1.2102 - val_accuracy: 0.7004
Epoch 73/150
782/782 - 3s - loss: 0.9599 - accuracy: 0.7654 - val_loss: 1.4327 - val_accuracy: 0.6493
Epoch 74/150
782/782 - 3s - loss: 0.9588 - accuracy: 0.7666 - val_loss: 1.0675 - val_accuracy: 0.7589
Epoch 75/150
782/782 - 3s - loss: 0.9556 - accuracy: 0.7665 - val_loss: 1.0195 - val_accuracy: 0.7522
Epoch 76/150
782/782 - 3s - loss: 0.9539 - accuracy: 0.7688 - val_loss: 1.1200 - val_accuracy: 0.7247
Epoch 77/150
782/782 - 3s - loss: 0.9457 - accuracy: 0.7720 - val_loss: 0.9975 - val_accuracy: 0.7633
Epoch 78/150
782/782 - 3s - loss: 0.9508 - accuracy: 0.7703 - val_loss: 1.1751 - val_accuracy: 0.7189
Epoch 79/150
782/782 - 3s - loss: 0.9435 - accuracy: 0.7722 - val_loss: 0.9880 - val_accuracy: 0.7714
Epoch 80/150
782/782 - 3s - loss: 0.9433 - accuracy: 0.7716 - val_loss: 1.1713 - val_accuracy: 0.7068
Epoch 81/150
782/782 - 3s - loss: 0.9462 - accuracy: 0.7722 - val_loss: 1.1444 - val_accuracy: 0.7179
Epoch 82/150
782/782 - 3s - loss: 0.9441 - accuracy: 0.7729 - val_loss: 0.9559 - val_accuracy: 0.7797
Epoch 83/150
782/782 - 3s - loss: 0.9415 - accuracy: 0.7742 - val_loss: 1.4211 - val_accuracy: 0.6499
Epoch 84/150
782/782 - 3s - loss: 0.9478 - accuracy: 0.7717 - val_loss: 1.1729 - val_accuracy: 0.7179
Epoch 85/150
782/782 - 3s - loss: 0.9449 - accuracy: 0.7712 - val_loss: 1.0739 - val_accuracy: 0.7492
Epoch 86/150
782/782 - 3s - loss: 0.9390 - accuracy: 0.7756 - val_loss: 1.1650 - val_accuracy: 0.7309
Epoch 87/150
782/782 - 3s - loss: 0.9400 - accuracy: 0.7751 - val_loss: 1.3710 - val_accuracy: 0.6640
Epoch 88/150
782/782 - 3s - loss: 0.9366 - accuracy: 0.7746 - val_loss: 1.0425 - val_accuracy: 0.7594
Epoch 89/150
782/782 - 3s - loss: 0.9332 - accuracy: 0.7729 - val_loss: 1.0161 - val_accuracy: 0.7581
Epoch 90/150
782/782 - 3s - loss: 0.9338 - accuracy: 0.7761 - val_loss: 1.0541 - val_accuracy: 0.7617
Epoch 91/150
782/782 - 3s - loss: 0.9397 - accuracy: 0.7750 - val_loss: 1.0404 - val_accuracy: 0.7545
Epoch 92/150
782/782 - 3s - loss: 0.9329 - accuracy: 0.7768 - val_loss: 1.1247 - val_accuracy: 0.7345
Epoch 93/150
782/782 - 3s - loss: 0.9305 - accuracy: 0.7755 - val_loss: 1.0979 - val_accuracy: 0.7397
Epoch 94/150
782/782 - 3s - loss: 0.9375 - accuracy: 0.7760 - val_loss: 1.0610 - val_accuracy: 0.7633
Epoch 95/150
782/782 - 3s - loss: 0.9323 - accuracy: 0.7780 - val_loss: 1.0498 - val_accuracy: 0.7711
Epoch 96/150
782/782 - 3s - loss: 0.9232 - accuracy: 0.7800 - val_loss: 1.2032 - val_accuracy: 0.7193
Epoch 97/150
782/782 - 3s - loss: 0.9306 - accuracy: 0.7761 - val_loss: 1.0579 - val_accuracy: 0.7494
Epoch 98/150
782/782 - 3s - loss: 0.9268 - accuracy: 0.7778 - val_loss: 0.9634 - val_accuracy: 0.7769
Epoch 99/150
782/782 - 3s - loss: 0.9304 - accuracy: 0.7777 - val_loss: 1.0616 - val_accuracy: 0.7504
Epoch 100/150
782/782 - 3s - loss: 0.9198 - accuracy: 0.7836 - val_loss: 1.1787 - val_accuracy: 0.7461
Epoch 101/150
782/782 - 3s - loss: 0.9326 - accuracy: 0.7795 - val_loss: 1.1036 - val_accuracy: 0.7408
Epoch 102/150
782/782 - 3s - loss: 0.9204 - accuracy: 0.7842 - val_loss: 1.0056 - val_accuracy: 0.7717
Epoch 103/150
782/782 - 3s - loss: 0.9256 - accuracy: 0.7782 - val_loss: 1.1710 - val_accuracy: 0.7064
Epoch 104/150
782/782 - 3s - loss: 0.9316 - accuracy: 0.7808 - val_loss: 1.1014 - val_accuracy: 0.7368
Epoch 105/150
782/782 - 3s - loss: 0.9224 - accuracy: 0.7837 - val_loss: 1.1095 - val_accuracy: 0.7375
Epoch 106/150
782/782 - 3s - loss: 0.9215 - accuracy: 0.7813 - val_loss: 0.9953 - val_accuracy: 0.7722
Epoch 107/150
782/782 - 3s - loss: 0.9224 - accuracy: 0.7824 - val_loss: 1.0859 - val_accuracy: 0.7489
Epoch 108/150
782/782 - 3s - loss: 0.9232 - accuracy: 0.7842 - val_loss: 1.0918 - val_accuracy: 0.7407
Epoch 109/150
782/782 - 3s - loss: 0.9172 - accuracy: 0.7846 - val_loss: 1.1252 - val_accuracy: 0.7343
Epoch 110/150
782/782 - 3s - loss: 0.9244 - accuracy: 0.7826 - val_loss: 0.9713 - val_accuracy: 0.7855
Epoch 111/150
782/782 - 3s - loss: 0.9231 - accuracy: 0.7828 - val_loss: 1.3571 - val_accuracy: 0.6969
Epoch 112/150
782/782 - 3s - loss: 0.9208 - accuracy: 0.7841 - val_loss: 1.1541 - val_accuracy: 0.7345
Epoch 113/150
782/782 - 3s - loss: 0.9213 - accuracy: 0.7843 - val_loss: 1.0826 - val_accuracy: 0.7580
Epoch 114/150
782/782 - 3s - loss: 0.9205 - accuracy: 0.7837 - val_loss: 1.1317 - val_accuracy: 0.7263
Epoch 115/150
782/782 - 3s - loss: 0.9171 - accuracy: 0.7844 - val_loss: 1.0250 - val_accuracy: 0.7526
Epoch 116/150
782/782 - 3s - loss: 0.9151 - accuracy: 0.7874 - val_loss: 1.0694 - val_accuracy: 0.7516
Epoch 117/150
782/782 - 3s - loss: 0.9146 - accuracy: 0.7865 - val_loss: 1.0348 - val_accuracy: 0.7632
Epoch 118/150
782/782 - 3s - loss: 0.9159 - accuracy: 0.7877 - val_loss: 0.9962 - val_accuracy: 0.7846
Epoch 119/150
782/782 - 3s - loss: 0.9106 - accuracy: 0.7886 - val_loss: 1.2255 - val_accuracy: 0.7032
Epoch 120/150
782/782 - 3s - loss: 0.9184 - accuracy: 0.7862 - val_loss: 1.2291 - val_accuracy: 0.7136
Epoch 121/150
782/782 - 3s - loss: 0.9160 - accuracy: 0.7859 - val_loss: 1.0188 - val_accuracy: 0.7795
Epoch 122/150
782/782 - 3s - loss: 0.9181 - accuracy: 0.7880 - val_loss: 1.0867 - val_accuracy: 0.7418
Epoch 123/150
782/782 - 3s - loss: 0.9061 - accuracy: 0.7907 - val_loss: 1.2490 - val_accuracy: 0.7197
Epoch 124/150
782/782 - 3s - loss: 0.9131 - accuracy: 0.7886 - val_loss: 1.3769 - val_accuracy: 0.6645
Epoch 125/150
782/782 - 3s - loss: 0.9155 - accuracy: 0.7870 - val_loss: 1.2258 - val_accuracy: 0.7181
Epoch 126/150
782/782 - 3s - loss: 0.9108 - accuracy: 0.7890 - val_loss: 1.0425 - val_accuracy: 0.7655
Epoch 127/150
782/782 - 3s - loss: 0.9110 - accuracy: 0.7894 - val_loss: 0.9999 - val_accuracy: 0.7820
Epoch 128/150
782/782 - 3s - loss: 0.9114 - accuracy: 0.7905 - val_loss: 1.1094 - val_accuracy: 0.7428
Epoch 129/150
782/782 - 3s - loss: 0.9117 - accuracy: 0.7901 - val_loss: 1.0939 - val_accuracy: 0.7437
Epoch 130/150
782/782 - 3s - loss: 0.8981 - accuracy: 0.7949 - val_loss: 1.0239 - val_accuracy: 0.7735
Epoch 131/150
782/782 - 3s - loss: 0.9088 - accuracy: 0.7922 - val_loss: 1.1108 - val_accuracy: 0.7532
Epoch 132/150
782/782 - 3s - loss: 0.9079 - accuracy: 0.7925 - val_loss: 0.9933 - val_accuracy: 0.7877
Epoch 133/150
782/782 - 3s - loss: 0.9067 - accuracy: 0.7924 - val_loss: 0.9864 - val_accuracy: 0.7822
Epoch 134/150
782/782 - 3s - loss: 0.9130 - accuracy: 0.7889 - val_loss: 1.2213 - val_accuracy: 0.7353
Epoch 135/150
782/782 - 3s - loss: 0.9125 - accuracy: 0.7901 - val_loss: 1.0716 - val_accuracy: 0.7587
Epoch 136/150
782/782 - 3s - loss: 0.9052 - accuracy: 0.7933 - val_loss: 1.2643 - val_accuracy: 0.7087
Epoch 137/150
782/782 - 3s - loss: 0.9073 - accuracy: 0.7921 - val_loss: 1.1047 - val_accuracy: 0.7563
Epoch 138/150
782/782 - 3s - loss: 0.9103 - accuracy: 0.7931 - val_loss: 1.2009 - val_accuracy: 0.7345
Epoch 139/150
782/782 - 3s - loss: 0.9099 - accuracy: 0.7903 - val_loss: 1.0580 - val_accuracy: 0.7655
Epoch 140/150
782/782 - 3s - loss: 0.9003 - accuracy: 0.7948 - val_loss: 1.0344 - val_accuracy: 0.7681
Epoch 141/150
782/782 - 3s - loss: 0.9090 - accuracy: 0.7908 - val_loss: 1.0771 - val_accuracy: 0.7551
Epoch 142/150
782/782 - 3s - loss: 0.9070 - accuracy: 0.7923 - val_loss: 1.4452 - val_accuracy: 0.6912
Epoch 143/150
782/782 - 3s - loss: 0.9098 - accuracy: 0.7930 - val_loss: 1.0144 - val_accuracy: 0.7781
Epoch 144/150
782/782 - 3s - loss: 0.9060 - accuracy: 0.7927 - val_loss: 1.2071 - val_accuracy: 0.7395
Epoch 145/150
782/782 - 3s - loss: 0.8996 - accuracy: 0.7944 - val_loss: 1.2634 - val_accuracy: 0.7182
Epoch 146/150
782/782 - 3s - loss: 0.8980 - accuracy: 0.7968 - val_loss: 1.0398 - val_accuracy: 0.7612
Epoch 147/150
782/782 - 3s - loss: 0.9031 - accuracy: 0.7934 - val_loss: 1.0147 - val_accuracy: 0.7721
Epoch 148/150
782/782 - 3s - loss: 0.9021 - accuracy: 0.7952 - val_loss: 0.9947 - val_accuracy: 0.7842
Epoch 149/150
782/782 - 3s - loss: 0.9003 - accuracy: 0.7963 - val_loss: 1.6127 - val_accuracy: 0.6525
Epoch 150/150
782/782 - 3s - loss: 0.9002 - accuracy: 0.7955 - val_loss: 0.9970 - val_accuracy: 0.7861
###Markdown
Evaluate the model
###Code
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2,batch_size=64)
print(test_acc)
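
# Added sketch (not in the original notebook): the same history object also records
# the training and validation loss, which is often worth plotting alongside accuracy.
plt.figure()
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend(loc='upper right')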
###Output
0.7860999703407288
|
python/210906-Python-basic2.ipynb | ###Markdown
Data structures> - A collection of data - A "basket" that holds data - A collection of data that can be stored in a variable >> - **`list`** >> - **`tuple`** >> - **`dict`** (dictionary) >> - **`set`** List (list)> - The most commonly used data structure in Python - It keeps its elements in order - Written with square brackets **`[ ]`** - Elements inside a list are separated by , (commas) Creating a list
###Code
# Create an empty list
# Terminology: storing data in a variable is also called saving, assignment, substitution or binding
empty_list = list()
# Square brackets work as well as the list() constructor
empty_list2 = []
print(empty_list)
print(empty_list2)
# Create a list together with its values
# Store a list holding the 5 text items 'coin', 'card', 'cash', 'id', 'licence' in a variable called wallet
wallet = ['coin', 'card', 'cash', 'id', 'licence']
wallet
# List in list
# A list can hold not only ints, floats and strings but also other data structures, such as another list or the structures covered later.
wallet2=[1,3.14,wallet]
wallet2
###Output
_____no_output_____
###Markdown
Counting list items
###Code
# To see the total number of items in a list:
# the len() command counts the number of elements in a data structure.
# It is a built-in Python command that works with many other data structures as well.
len(wallet)
###Output
_____no_output_____
###Markdown
List indexing and slicing> - Accessing (looking up) specific items of a list - Indexing accesses a single value, slicing accesses a group of values - The position of a list element is called its index and starts from **0**
###Code
# Indexing
# Python indices start from 0.
# Get the first item of the list
wallet[0]
# Slicing
# [start index : end index+1 : steps]
# Get the first through third items of the list
wallet[0:3]
wallet[:3]
wallet[-5:-2]
# Get the fourth through last items of the list
wallet[3:]
wallet[-2:]
# Get the items up to the fourth one
wallet[:4]
wallet[0:4]
# Get the very last item of the list
# A negative index counts backwards from the end of the list
wallet[4]
wallet[-1]
# Get every second item of the list
test_list = [1,2,3,4,5,6,7,8,9,10]
test_list[1::2]
# Exercise: index 'cash' from wallet2 above
wallet2
wallet2[2][2]
###Output
_____no_output_____
###Markdown
Editing a list
###Code
# Update a specific item of a list
# Access the data with an index or a slice and change it directly (a short illustration is added below)
###Output
_____no_output_____
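###Markdown
A short illustration (added here; the original cell above contained only the comments) of updating list items in place through an index and through a slice.
###Code
# Replace a single item by index, and several items at once by slice assignment.
wallet[0] = 'bus card'
wallet[3:5] = ['ID card', 'licence']
wallet
###Output
_____no_output_____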
###Markdown
Built-in list methods> Along with the data structure itself, lists provide built-in methods needed for working with them. They are covered in detail later; for now just remember roughly how they are used. A method is reached by writing the list name followed by a . (dot). - **`list_name`**.**`func()`**
###Code
# Add an item to a list
# Pass the value to append() to add it to the list.
wallet2.append(500)
#extend
# wallet2.extend(wallet)
# wallet2
# Remove a specific item from the list
wallet2.remove(500)
# Pop the last item off the list (returns the value and removes it from the list)
wallet2.append(100)
pop_val = wallet2.pop()
wallet2
pop_val
# The popped last item can be stored in a variable and used later, as above.
# It is also handy when you simply want to remove the value without keeping it.
# Create num_list to use for testing
num_list = [3, 4, 6, 1, 2, 8, 7]
# Sort the list values (numbers ascending, strings alphabetically, Korean in dictionary order)
num_list.sort()
# A descending sort is also possible
num_list.sort(reverse=True)
num_list
# Reverse the list order (simple reversal, no sorting)
num_list = [3, 4, 6, 1, 2, 8, 7]
num_list.reverse()
num_list
num_list = [3, 4, 6, 1, 2, 8, 7]
# Show the index position of the value 6 in the list
num_list.index(6)
# Use the index of the value 6 to find that position and change the value to 100
num_list[num_list.index(6)] =100
num_list
num_list
# Count how many times a specific item appears in the list
# (the count changes if another copy is appended with append())
wallet.count('card')
###Output
_____no_output_____
###Markdown
List operations
###Code
# Create num1 and num2 lists containing int data
num1 = [1, 2, 3, 4, 5]
num2 = [4, 5, 6, 7, 8]
# Adding lists (concatenation)
num1+num2
# Sum, minimum and maximum of the list items
sum(num2), min(num2), max(num2)
# Like the print and len commands seen earlier, these also work on other data structures.
# Membership operator test
1 not in num1
# Multiplication with list elements
num1[0]*num2[1]
###Output
_____no_output_____
###Markdown
Deleting a list
###Code
# Deleting a list (deleting the variable vs. deleting the values)
# clear() keeps the list itself but removes all of its items.
wallet.clear()
wallet
# Remove it from memory completely (referencing wallet afterwards raises a NameError)
del wallet
wallet
###Output
_____no_output_____
###Markdown
Dictionary (dict)> A data structure that stores data as `key` and `value` pairs - `key` : `value` - Written with curly braces **`{ }`** - Items inside a dictionary are separated by commas, just like a list - Positional indexing does not work Creating a dictionary
###Code
# Create an empty dictionary
test_dict = {}
# Dictionaries use curly braces; the dict() constructor also works
test_dict = dict()
# Create a dictionary together with its values
wallet = {
    'card':'SK card',
    'cash':3000,
    'coin':{'500 won': 2, '100 won' : 3}, # dictionary inside a dictionary
    'id':['resident ID','passport'], # list inside a dictionary
    'licence':'driver licence'
}
# Pass key : value pairs inside curly braces; each item is separated by a comma
# As with lists, a dictionary can store other data structures such as lists inside it
# Show the dictionary
wallet
###Output
_____no_output_____
###Markdown
Counting dictionary items
###Code
len(wallet)
###Output
_____no_output_____
###Markdown
Accessing dictionary values
###Code
# Access a dictionary value by its key
wallet['card']
# Where a list uses an index, a dictionary takes a key to reach a value.
# Access the '100 won' coin count in wallet
wallet['coin']['100 won']
# the 'passport' entry
wallet['id'][1]
###Output
_____no_output_____
###Markdown
Editing a dictionary
###Code
# Store the string 'Happy Point' in the dictionary under a new 'point card' key
wallet['point card'] = 'Happy Point'
wallet
# A key can also be a number (though that is easy to confuse with indexing)
# wallet[100] = 100
# wallet
# Update a specific dictionary item
wallet.update({'point card': 'CJ one'}) # assigning directly, as above, is usually more convenient
wallet
# Pop a specific entry out of the dictionary; the key has to be given
# The value returned by pop(); the entry is removed from wallet afterwards
# NOTE: this assumes the commented-out wallet[100] = 100 line above has been run; otherwise pop(100) raises a KeyError
wallet.pop(100)
wallet
# Nested values can also be accessed and updated.
# Add a '50 won': 1 entry to the value stored under wallet's 'coin' key.
wallet['coin']['50 won'] = 1 # or wallet['coin'].update({'50 won': 1})
wallet
# Append 'driver licence' to wallet's 'id' key
wallet['id'].append('driver licence')
wallet
###Output
_____no_output_____
###Markdown
Deleting dictionary entries
###Code
# Remove a dictionary entry
# Pass the key to del to remove the key and its value at the same time
del wallet['id']
# Remove every item from the dictionary
wallet.clear()
wallet
# Delete the dictionary variable entirely
del wallet
###Output
_____no_output_____
###Markdown
Additional dictionary methods
###Code
# Check the keys in the dictionary (re-run the wallet creation cell first, since wallet was deleted above)
wallet.keys()
# Check the values in the dictionary
wallet.values()
wallet['coin'].keys()
# Check the key, value pairs of the dictionary
wallet.items()
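# Added example: keys, values and items can also be iterated over directly.
for key, value in wallet.items():
    print(key, ':', value)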
###Output
_____no_output_____
###Markdown
Tuple (tuple)> - A collection of data whose values are fixed and cannot be changed - Rarely used for passing input values into a computation; mostly used for returning result values - Written with parentheses **`( )`** - Items are separated by , (commas) Creating a tuple
###Code
test_tuple = () # rarely used, because the meaning is ambiguous
test_tuple1 = tuple()
test_tuple2 = (1, 2, 3, 4)
test_tuple3 = 5, 6
###Output
_____no_output_____
###Markdown
Tuple indexing
###Code
# Tuple indexing test
test_tuple3
test_tuple2[1]
###Output
_____no_output_____
###Markdown
Editing a tuple: the values inside a tuple cannot be changed. New values can only be "added" by concatenating tuples into a new tuple, as shown in the example after the next cell.
###Code
# Tuple update test
test_tuple2[1] = 2 # raises a TypeError: tuples do not support item assignment
###Output
_____no_output_____
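###Markdown
A short added illustration of what "adding" values to a tuple means: concatenation builds a brand-new tuple, while the original tuple stays unchanged.
###Code
# Concatenation creates a new tuple; test_tuple2 itself is not modified.
bigger_tuple = test_tuple2 + (5, 6)
bigger_tuple
###Output
_____no_output_____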
###Markdown
Deleting a tuple: the whole tuple can be deleted, but individual values cannot be removed.
###Code
# Tuple deletion test
del test_tuple3
###Output
_____no_output_____
###Markdown
Unpacking (also works for lists and strings)
###Code
# Since the values cannot change, they can be unpacked into separate variables
a,b,c,d, = test_tuple2
print(a,b,c,d)
###Output
1 2 3 4
###Markdown
Set (set)> - A data structure that does not allow duplicate values - Also used to find the unique values of a dataset (removing duplicates) - Written with curly braces **`{ }`** - Supports **set operations**. Creating a set
###Code
# Set creation test
empty_set = {}  # note: {} by itself creates an empty dict, so use set() for an empty set
empty_set = set()
test_set1 = {1, 2, 3, 4}
test_set2 = {3, 4, 5, 6}
###Output
_____no_output_____
###Markdown
Editing a set
###Code
# Add a single value to the set
# Duplicates are not allowed, so a value that is already in the set is not added again
test_set1.add(5)
test_set1
# Add several values to the set
test_set1.update({5,6,7})
test_set1
# Remove a single value from the set
test_set1.discard(5)
test_set1
###Output
_____no_output_____
###Markdown
Set operations
###Code
# Intersection
# test_set1 & test_set2
test_set1.intersection(test_set2)
# Union
# test_set1 | test_set2
test_set1.union(test_set2)
# Difference
# test_set1 - test_set2
test_set1.difference(test_set2)
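# Added example: symmetric difference (items that are in exactly one of the two sets)
# test_set1 ^ test_set2
test_set1.symmetric_difference(test_set2)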
###Output
_____no_output_____
###Markdown
Deleting a set
###Code
# Set deletion test
del test_set1
test_set1
###Output
_____no_output_____ |
dmu26/dmu26_XID+SPIRE_CDFS-SWIRE/XID+SPIRE_prior.ipynb | ###Markdown
This notebook uses all the raw data from the XID+MIPS catalogue, maps, PSF and relevant MOCs to create the XID+ prior object and the relevant tiling scheme. Read in MOCs. The selection function required is the main MOC associated with the masterlist.
###Code
import numpy as np, seaborn as sns, pymoc, xidplus  # imports used throughout this notebook (missing from the original cell)
from astropy.table import Table
Sel_func=pymoc.MOC()
Sel_func.read('../data/CDFS-SWIRE/holes_CDFS-SWIRE_irac1_O16_MOC.fits')
###Output
_____no_output_____
###Markdown
Read in XID+MIPS catalogue
###Code
XID_MIPS=Table.read('../data/CDFS-SWIRE/MIPS/dmu26_XID+MIPS_CDFS-SWIRE_cat_20170901.fits')
XID_MIPS[0:10]
skew=(XID_MIPS['FErr_MIPS_24_u']-XID_MIPS['F_MIPS_24'])/(XID_MIPS['F_MIPS_24']-XID_MIPS['FErr_MIPS_24_l'])
skew.name='(84th-50th)/(50th-16th) percentile'
g=sns.jointplot(x=np.log10(XID_MIPS['F_MIPS_24']),y=skew, kind='hex')
###Output
_____no_output_____
###Markdown
The uncertainties become Gaussian by $\sim 20 \mathrm{\mu Jy}$
###Code
good=XID_MIPS['F_MIPS_24']>20
good.sum()
###Output
_____no_output_____
###Markdown
Read in Maps
###Code
pswfits='../data/CDFS-SWIRE/SPIRE/CDFS-SWIRE-NEST_image_250_SMAP_v6.0.fits'#SPIRE 250 map
pmwfits='../data/CDFS-SWIRE/SPIRE/CDFS-SWIRE-NEST_image_350_SMAP_v6.0.fits'#SPIRE 350 map
plwfits='../data/CDFS-SWIRE/SPIRE/CDFS-SWIRE-NEST_image_500_SMAP_v6.0.fits'#SPIRE 500 map
#output folder
output_folder='./'
from astropy.io import fits
from astropy import wcs
#-----250-------------
hdulist = fits.open(pswfits)
im250phdu=hdulist[0].header
im250hdu=hdulist[1].header
im250=hdulist[1].data*1.0E3 #convert to mJy
nim250=hdulist[2].data*1.0E3 #convert to mJy
w_250 = wcs.WCS(hdulist[1].header)
pixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----350-------------
hdulist = fits.open(pmwfits)
im350phdu=hdulist[0].header
im350hdu=hdulist[1].header
im350=hdulist[1].data*1.0E3 #convert to mJy
nim350=hdulist[2].data*1.0E3 #convert to mJy
w_350 = wcs.WCS(hdulist[1].header)
pixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
#-----500-------------
hdulist = fits.open(plwfits)
im500phdu=hdulist[0].header
im500hdu=hdulist[1].header
im500=hdulist[1].data*1.0E3 #convert to mJy
nim500=hdulist[2].data*1.0E3 #convert to mJy
w_500 = wcs.WCS(hdulist[1].header)
pixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)
hdulist.close()
## Set XID+ prior class
#---prior250--------
prior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=Sel_func)#Initialise with map, uncertianty map, wcs info and primary header
prior250.prior_cat(XID_MIPS['RA'][good],XID_MIPS['Dec'][good],'dmu26_XID+MIPS_CDFS-SWIRE_cat_20170901.fits',ID=XID_MIPS['help_id'][good])#Set input catalogue
prior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)
#---prior350--------
prior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=Sel_func)
prior350.prior_cat(XID_MIPS['RA'][good],XID_MIPS['Dec'][good],'dmu26_XID+MIPS_CDFS-SWIRE_cat_20170901.fits',ID=XID_MIPS['help_id'][good])
prior350.prior_bkg(-5.0,5)
#---prior500--------
prior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=Sel_func)
prior500.prior_cat(XID_MIPS['RA'][good],XID_MIPS['Dec'][good],'dmu26_XID+MIPS_CDFS-SWIRE_cat_20170901.fits',ID=XID_MIPS['help_id'][good])
prior500.prior_bkg(-5.0,5)
#pixsize array (size of pixels in arcseconds)
pixsize=np.array([pixsize250,pixsize350,pixsize500])
#point response function for the three bands
prfsize=np.array([18.15,25.15,36.3])
#use Gaussian2DKernel to create prf (requires stddev rather than FWHM, hence prfsize/2.355)
from astropy.convolution import Gaussian2DKernel
##---------fit using Gaussian beam-----------------------
prf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)
prf250.normalize(mode='peak')
prf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)
prf350.normalize(mode='peak')
prf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)
prf500.normalize(mode='peak')
pind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map
pind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map
pind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in terms of pixel scale of map
prior250.set_prf(prf250.array,pind250,pind250)#requires psf as 2d grid, and x and y bins for grid (in pixel scale)
prior350.set_prf(prf350.array,pind350,pind350)
prior500.set_prf(prf500.array,pind500,pind500)
import pickle
#from moc, get healpix pixels at a given order
from xidplus import moc_routines
order=9
tiles=moc_routines.get_HEALPix_pixels(order,prior250.sra,prior250.sdec,unique=True)
order_large=6
tiles_large=moc_routines.get_HEALPix_pixels(order_large,prior250.sra,prior250.sdec,unique=True)
print('----- There are '+str(len(tiles))+' tiles required for input catalogue and '+str(len(tiles_large))+' large tiles')
output_folder='./'
outfile=output_folder+'Master_prior.pkl'
with open(outfile, 'wb') as f:
pickle.dump({'priors':[prior250,prior350,prior500],'tiles':tiles,'order':order,'version':xidplus.io.git_version()},f)
outfile=output_folder+'Tiles.pkl'
with open(outfile, 'wb') as f:
pickle.dump({'tiles':tiles,'order':order,'tiles_large':tiles_large,'order_large':order_large,'version':xidplus.io.git_version()},f)
raise SystemExit()
prior250.nsrc
###Output
_____no_output_____ |
05Natural Language Processing/01Lexical Processing/02Basic Lexical Processing/02Word Frequencies and Stop Words/stopwords.ipynb | ###Markdown
Plotting word frequencies
###Code
import requests
from nltk import FreqDist
from nltk.corpus import stopwords
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
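###Markdown
Before downloading the text, here is a minimal sketch (an addition, not part of the original notebook) of how the `FreqDist` and `stopwords` imports above are typically combined; the helper can be applied once the text is available, e.g. `word_frequencies(alice.text).most_common(10)`. It assumes the NLTK stop word list has been downloaded with `nltk.download('stopwords')`.
###Code
# Sketch of a word-frequency helper using the imports above (illustrative assumption).
def word_frequencies(text, remove_stopwords=True):
    # keep alphabetic tokens only, lower-cased
    tokens = [w.lower() for w in text.split() if w.isalpha()]
    if remove_stopwords:
        stop = set(stopwords.words('english'))  # requires nltk.download('stopwords')
        tokens = [w for w in tokens if w not in stop]
    return FreqDist(tokens)
###Output
_____no_output_____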
###Markdown
Download text of 'Alice in Wonderland' ebook from https://www.gutenberg.org/
###Code
url = "https://www.gutenberg.org/files/11/11-0.txt"
alice = requests.get(url)
# Force UTF-8 decoding (the file is UTF-8 with a BOM); without this, requests may
# guess the wrong encoding and curly quotes can appear garbled, as in the output below.
alice.encoding = "utf-8-sig"
print(alice.text)
###Output
รฏยปยฟThe Project Gutenberg EBook of Aliceรขยยs Adventures in Wonderland, by Lewis Carroll
This eBook is for the use of anyone anywhere in the United States and most
other parts of the world at no cost and with almost no restrictions
whatsoever. You may copy it, give it away or re-use it under the terms of
the Project Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United States, you'll have
to check the laws of the country where you are located before using this ebook.
Title: Aliceรขยยs Adventures in Wonderland
Author: Lewis Carroll
Release Date: June 25, 2008 [EBook #11]
[Most recently updated: October 12, 2020]
Language: English
Character set encoding: UTF-8
*** START OF THIS PROJECT GUTENBERG EBOOK ALICEรขยยS ADVENTURES IN WONDERLAND ***
Produced by Arthur DiBianca and David Widger
[Illustration]
Aliceรขยยs Adventures in Wonderland
by Lewis Carroll
THE MILLENNIUM FULCRUM EDITION 3.0
Contents
CHAPTER I. Down the Rabbit-Hole
CHAPTER II. The Pool of Tears
CHAPTER III. A Caucus-Race and a Long Tale
CHAPTER IV. The Rabbit Sends in a Little Bill
CHAPTER V. Advice from a Caterpillar
CHAPTER VI. Pig and Pepper
CHAPTER VII. A Mad Tea-Party
CHAPTER VIII. The Queenรขยยs Croquet-Ground
CHAPTER IX. The Mock Turtleรขยยs Story
CHAPTER X. The Lobster Quadrille
CHAPTER XI. Who Stole the Tarts?
CHAPTER XII. Aliceรขยยs Evidence
CHAPTER I.
Down the Rabbit-Hole
Alice was beginning to get very tired of sitting by her sister on the
bank, and of having nothing to do: once or twice she had peeped into
the book her sister was reading, but it had no pictures or
conversations in it, รขยยand what is the use of a book,รขยย thought Alice
รขยยwithout pictures or conversations?รขยย
So she was considering in her own mind (as well as she could, for the
hot day made her feel very sleepy and stupid), whether the pleasure of
making a daisy-chain would be worth the trouble of getting up and
picking the daisies, when suddenly a White Rabbit with pink eyes ran
close by her.
There was nothing so _very_ remarkable in that; nor did Alice think it
so _very_ much out of the way to hear the Rabbit say to itself, รขยยOh
dear! Oh dear! I shall be late!รขยย (when she thought it over afterwards,
it occurred to her that she ought to have wondered at this, but at the
time it all seemed quite natural); but when the Rabbit actually _took a
watch out of its waistcoat-pocket_, and looked at it, and then hurried
on, Alice started to her feet, for it flashed across her mind that she
had never before seen a rabbit with either a waistcoat-pocket, or a
watch to take out of it, and burning with curiosity, she ran across the
field after it, and fortunately was just in time to see it pop down a
large rabbit-hole under the hedge.
In another moment down went Alice after it, never once considering how
in the world she was to get out again.
The rabbit-hole went straight on like a tunnel for some way, and then
dipped suddenly down, so suddenly that Alice had not a moment to think
about stopping herself before she found herself falling down a very
deep well.
Either the well was very deep, or she fell very slowly, for she had
plenty of time as she went down to look about her and to wonder what
was going to happen next. First, she tried to look down and make out
what she was coming to, but it was too dark to see anything; then she
looked at the sides of the well, and noticed that they were filled with
cupboards and book-shelves; here and there she saw maps and pictures
hung upon pegs. She took down a jar from one of the shelves as she
passed; it was labelled รขยยORANGE MARMALADEรขยย, but to her great
disappointment it was empty: she did not like to drop the jar for fear
of killing somebody underneath, so managed to put it into one of the
cupboards as she fell past it.
รขยยWell!รขยย thought Alice to herself, รขยยafter such a fall as this, I shall
think nothing of tumbling down stairs! How brave theyรขยยll all think me
at home! Why, I wouldnรขยยt say anything about it, even if I fell off the
top of the house!รขยย (Which was very likely true.)
Down, down, down. Would the fall _never_ come to an end? รขยยI wonder how
many miles Iรขยยve fallen by this time?รขยย she said aloud. รขยยI must be
getting somewhere near the centre of the earth. Let me see: that would
be four thousand miles down, I thinkรขยยรขยย (for, you see, Alice had learnt
several things of this sort in her lessons in the schoolroom, and
though this was not a _very_ good opportunity for showing off her
knowledge, as there was no one to listen to her, still it was good
practice to say it over) รขยยรขยยyes, thatรขยยs about the right distanceรขยยbut
then I wonder what Latitude or Longitude Iรขยยve got to?รขยย (Alice had no
idea what Latitude was, or Longitude either, but thought they were nice
grand words to say.)
Presently she began again. รขยยI wonder if I shall fall right _through_
the earth! How funny itรขยยll seem to come out among the people that walk
with their heads downward! The Antipathies, I thinkรขยยรขยย (she was rather
glad there _was_ no one listening, this time, as it didnรขยยt sound at all
the right word) รขยยรขยยbut I shall have to ask them what the name of the
country is, you know. Please, Maรขยยam, is this New Zealand or Australia?รขยย
(and she tried to curtsey as she spokeรขยยfancy _curtseying_ as youรขยยre
falling through the air! Do you think you could manage it?) รขยยAnd what
an ignorant little girl sheรขยยll think me for asking! No, itรขยยll never do
to ask: perhaps I shall see it written up somewhere.รขยย
Down, down, down. There was nothing else to do, so Alice soon began
talking again. รขยยDinahรขยยll miss me very much to-night, I should think!รขยย
(Dinah was the cat.) รขยยI hope theyรขยยll remember her saucer of milk at
tea-time. Dinah my dear! I wish you were down here with me! There are
no mice in the air, Iรขยยm afraid, but you might catch a bat, and thatรขยยs
very like a mouse, you know. But do cats eat bats, I wonder?รขยย And here
Alice began to get rather sleepy, and went on saying to herself, in a
dreamy sort of way, รขยยDo cats eat bats? Do cats eat bats?รขยย and
sometimes, รขยยDo bats eat cats?รขยย for, you see, as she couldnรขยยt answer
either question, it didnรขยยt much matter which way she put it. She felt
that she was dozing off, and had just begun to dream that she was
walking hand in hand with Dinah, and saying to her very earnestly,
รขยยNow, Dinah, tell me the truth: did you ever eat a bat?รขยย when suddenly,
thump! thump! down she came upon a heap of sticks and dry leaves, and
the fall was over.
Alice was not a bit hurt, and she jumped up on to her feet in a moment:
she looked up, but it was all dark overhead; before her was another
long passage, and the White Rabbit was still in sight, hurrying down
it. There was not a moment to be lost: away went Alice like the wind,
and was just in time to hear it say, as it turned a corner, รขยยOh my ears
and whiskers, how late itรขยยs getting!รขยย She was close behind it when she
turned the corner, but the Rabbit was no longer to be seen: she found
herself in a long, low hall, which was lit up by a row of lamps hanging
from the roof.
There were doors all round the hall, but they were all locked; and when
Alice had been all the way down one side and up the other, trying every
door, she walked sadly down the middle, wondering how she was ever to
get out again.
Suddenly she came upon a little three-legged table, all made of solid
glass; there was nothing on it except a tiny golden key, and Aliceรขยยs
first thought was that it might belong to one of the doors of the hall;
but, alas! either the locks were too large, or the key was too small,
but at any rate it would not open any of them. However, on the second
time round, she came upon a low curtain she had not noticed before, and
behind it was a little door about fifteen inches high: she tried the
little golden key in the lock, and to her great delight it fitted!
Alice opened the door and found that it led into a small passage, not
much larger than a rat-hole: she knelt down and looked along the
passage into the loveliest garden you ever saw. How she longed to get
out of that dark hall, and wander about among those beds of bright
flowers and those cool fountains, but she could not even get her head
through the doorway; รขยยand even if my head would go through,รขยย thought
poor Alice, รขยยit would be of very little use without my shoulders. Oh,
how I wish I could shut up like a telescope! I think I could, if I only
knew how to begin.รขยย For, you see, so many out-of-the-way things had
happened lately, that Alice had begun to think that very few things
indeed were really impossible.
There seemed to be no use in waiting by the little door, so she went
back to the table, half hoping she might find another key on it, or at
any rate a book of rules for shutting people up like telescopes: this
time she found a little bottle on it, (รขยยwhich certainly was not here
before,รขยย said Alice,) and round the neck of the bottle was a paper
label, with the words รขยยDRINK ME,รขยย beautifully printed on it in large
letters.
It was all very well to say รขยยDrink me,รขยย but the wise little Alice was
not going to do _that_ in a hurry. รขยยNo, Iรขยยll look first,รขยย she said,
รขยยand see whether itรขยยs marked รขยย_poison_รขยย or notรขยย; for she had read
several nice little histories about children who had got burnt, and
eaten up by wild beasts and other unpleasant things, all because they
_would_ not remember the simple rules their friends had taught them:
such as, that a red-hot poker will burn you if you hold it too long;
and that if you cut your finger _very_ deeply with a knife, it usually
bleeds; and she had never forgotten that, if you drink much from a
bottle marked รขยยpoison,รขยย it is almost certain to disagree with you,
sooner or later.
However, this bottle was _not_ marked รขยยpoison,รขยย so Alice ventured to
taste it, and finding it very nice, (it had, in fact, a sort of mixed
flavour of cherry-tart, custard, pine-apple, roast turkey, toffee, and
hot buttered toast,) she very soon finished it off.
* * * * * * *
* * * * * *
* * * * * * *
รขยยWhat a curious feeling!รขยย said Alice; รขยยI must be shutting up like a
telescope.รขยย
And so it was indeed: she was now only ten inches high, and her face
brightened up at the thought that she was now the right size for going
through the little door into that lovely garden. First, however, she
waited for a few minutes to see if she was going to shrink any further:
she felt a little nervous about this; รขยยfor it might end, you know,รขยย
said Alice to herself, รขยยin my going out altogether, like a candle. I
wonder what I should be like then?รขยย And she tried to fancy what the
flame of a candle is like after the candle is blown out, for she could
not remember ever having seen such a thing.
After a while, finding that nothing more happened, she decided on going
into the garden at once; but, alas for poor Alice! when she got to the
door, she found she had forgotten the little golden key, and when she
went back to the table for it, she found she could not possibly reach
it: she could see it quite plainly through the glass, and she tried her
best to climb up one of the legs of the table, but it was too slippery;
and when she had tired herself out with trying, the poor little thing
sat down and cried.
รขยยCome, thereรขยยs no use in crying like that!รขยย said Alice to herself,
rather sharply; รขยยI advise you to leave off this minute!รขยย She generally
gave herself very good advice, (though she very seldom followed it),
and sometimes she scolded herself so severely as to bring tears into
her eyes; and once she remembered trying to box her own ears for having
cheated herself in a game of croquet she was playing against herself,
for this curious child was very fond of pretending to be two people.
รขยยBut itรขยยs no use now,รขยย thought poor Alice, รขยยto pretend to be two
people! Why, thereรขยยs hardly enough of me left to make _one_ respectable
person!รขยย
Soon her eye fell on a little glass box that was lying under the table:
she opened it, and found in it a very small cake, on which the words
รขยยEAT MEรขยย were beautifully marked in currants. รขยยWell, Iรขยยll eat it,รขยย said
Alice, รขยยand if it makes me grow larger, I can reach the key; and if it
makes me grow smaller, I can creep under the door; so either way Iรขยยll
get into the garden, and I donรขยยt care which happens!รขยย
She ate a little bit, and said anxiously to herself, รขยยWhich way? Which
way?รขยย, holding her hand on the top of her head to feel which way it was
growing, and she was quite surprised to find that she remained the same
size: to be sure, this generally happens when one eats cake, but Alice
had got so much into the way of expecting nothing but out-of-the-way
things to happen, that it seemed quite dull and stupid for life to go
on in the common way.
So she set to work, and very soon finished off the cake.
* * * * * * *
* * * * * *
* * * * * * *
CHAPTER II.
The Pool of Tears
รขยยCuriouser and curiouser!รขยย cried Alice (she was so much surprised, that
for the moment she quite forgot how to speak good English); รขยยnow Iรขยยm
opening out like the largest telescope that ever was! Good-bye, feet!รขยย
(for when she looked down at her feet, they seemed to be almost out of
sight, they were getting so far off). รขยยOh, my poor little feet, I
wonder who will put on your shoes and stockings for you now, dears? Iรขยยm
sure _I_ shanรขยยt be able! I shall be a great deal too far off to trouble
myself about you: you must manage the best way you can;รขยยbut I must be
kind to them,รขยย thought Alice, รขยยor perhaps they wonรขยยt walk the way I
want to go! Let me see: Iรขยยll give them a new pair of boots every
Christmas.รขยย
And she went on planning to herself how she would manage it. รขยยThey must
go by the carrier,รขยย she thought; รขยยand how funny itรขยยll seem, sending
presents to oneรขยยs own feet! And how odd the directions will look!
_Aliceรขยยs Right Foot, Esq., Hearthrug, near the Fender,_ (_with
Aliceรขยยs love_).
Oh dear, what nonsense Iรขยยm talking!รขยย
Just then her head struck against the roof of the hall: in fact she was
now more than nine feet high, and she at once took up the little golden
key and hurried off to the garden door.
Poor Alice! It was as much as she could do, lying down on one side, to
look through into the garden with one eye; but to get through was more
hopeless than ever: she sat down and began to cry again.
รขยยYou ought to be ashamed of yourself,รขยย said Alice, รขยยa great girl like
you,รขยย (she might well say this), รขยยto go on crying in this way! Stop
this moment, I tell you!รขยย But she went on all the same, shedding
gallons of tears, until there was a large pool all round her, about
four inches deep and reaching half down the hall.
After a time she heard a little pattering of feet in the distance, and
she hastily dried her eyes to see what was coming. It was the White
Rabbit returning, splendidly dressed, with a pair of white kid gloves
in one hand and a large fan in the other: he came trotting along in a
great hurry, muttering to himself as he came, รขยยOh! the Duchess, the
Duchess! Oh! wonรขยยt she be savage if Iรขยยve kept her waiting!รขยย Alice felt
so desperate that she was ready to ask help of any one; so, when the
Rabbit came near her, she began, in a low, timid voice, รขยยIf you please,
sirรขยยรขยย The Rabbit started violently, dropped the white kid gloves and
the fan, and skurried away into the darkness as hard as he could go.
Alice took up the fan and gloves, and, as the hall was very hot, she
kept fanning herself all the time she went on talking: รขยยDear, dear! How
queer everything is to-day! And yesterday things went on just as usual.
I wonder if Iรขยยve been changed in the night? Let me think: was I the
same when I got up this morning? I almost think I can remember feeling
a little different. But if Iรขยยm not the same, the next question is, Who
in the world am I? Ah, _thatรขยยs_ the great puzzle!รขยย And she began
thinking over all the children she knew that were of the same age as
herself, to see if she could have been changed for any of them.
รขยยIรขยยm sure Iรขยยm not Ada,รขยย she said, รขยยfor her hair goes in such long
ringlets, and mine doesnรขยยt go in ringlets at all; and Iรขยยm sure I canรขยยt
be Mabel, for I know all sorts of things, and she, oh! she knows such a
very little! Besides, _sheรขยยs_ she, and _Iรขยยm_ I, andรขยยoh dear, how
puzzling it all is! Iรขยยll try if I know all the things I used to know.
Let me see: four times five is twelve, and four times six is thirteen,
and four times seven isรขยยoh dear! I shall never get to twenty at that
rate! However, the Multiplication Table doesnรขยยt signify: letรขยยs try
Geography. London is the capital of Paris, and Paris is the capital of
Rome, and Romeรขยยno, _thatรขยยs_ all wrong, Iรขยยm certain! I must have been
changed for Mabel! Iรขยยll try and say รขยย_How doth the little_รขยยรขยยรขยย and she
crossed her hands on her lap as if she were saying lessons, and began
to repeat it, but her voice sounded hoarse and strange, and the words
did not come the same as they used to do:รขยย
รขยยHow doth the little crocodile
Improve his shining tail,
And pour the waters of the Nile
On every golden scale!
รขยยHow cheerfully he seems to grin,
How neatly spread his claws,
And welcome little fishes in
With gently smiling jaws!รขยย
รขยยIรขยยm sure those are not the right words,รขยย said poor Alice, and her eyes
filled with tears again as she went on, รขยยI must be Mabel after all, and
I shall have to go and live in that poky little house, and have next to
no toys to play with, and oh! ever so many lessons to learn! No, Iรขยยve
made up my mind about it; if Iรขยยm Mabel, Iรขยยll stay down here! Itรขยยll be
no use their putting their heads down and saying รขยยCome up again, dear!รขยย
I shall only look up and say รขยยWho am I then? Tell me that first, and
then, if I like being that person, Iรขยยll come up: if not, Iรขยยll stay down
here till Iรขยยm somebody elseรขยยรขยยbut, oh dear!รขยย cried Alice, with a sudden
burst of tears, รขยยI do wish they _would_ put their heads down! I am so
_very_ tired of being all alone here!รขยย
As she said this she looked down at her hands, and was surprised to see
that she had put on one of the Rabbitรขยยs little white kid gloves while
she was talking. รขยยHow _can_ I have done that?รขยย she thought. รขยยI must be
growing small again.รขยย She got up and went to the table to measure
herself by it, and found that, as nearly as she could guess, she was
now about two feet high, and was going on shrinking rapidly: she soon
found out that the cause of this was the fan she was holding, and she
dropped it hastily, just in time to avoid shrinking away altogether.
รขยยThat _was_ a narrow escape!รขยย said Alice, a good deal frightened at the
sudden change, but very glad to find herself still in existence; รขยยand
now for the garden!รขยย and she ran with all speed back to the little
door: but, alas! the little door was shut again, and the little golden
key was lying on the glass table as before, รขยยand things are worse than
ever,รขยย thought the poor child, รขยยfor I never was so small as this
before, never! And I declare itรขยยs too bad, that it is!รขยย
As she said these words her foot slipped, and in another moment,
splash! she was up to her chin in salt water. Her first idea was that
she had somehow fallen into the sea, รขยยand in that case I can go back by
railway,รขยย she said to herself. (Alice had been to the seaside once in
her life, and had come to the general conclusion, that wherever you go
to on the English coast you find a number of bathing machines in the
sea, some children digging in the sand with wooden spades, then a row
of lodging houses, and behind them a railway station.) However, she
soon made out that she was in the pool of tears which she had wept when
she was nine feet high.
รขยยI wish I hadnรขยยt cried so much!รขยย said Alice, as she swam about, trying
to find her way out. รขยยI shall be punished for it now, I suppose, by
being drowned in my own tears! That _will_ be a queer thing, to be
sure! However, everything is queer to-day.รขยย
Just then she heard something splashing about in the pool a little way
off, and she swam nearer to make out what it was: at first she thought
it must be a walrus or hippopotamus, but then she remembered how small
she was now, and she soon made out that it was only a mouse that had
slipped in like herself.
รขยยWould it be of any use, now,รขยย thought Alice, รขยยto speak to this mouse?
Everything is so out-of-the-way down here, that I should think very
likely it can talk: at any rate, thereรขยยs no harm in trying.รขยย So she
began: รขยยO Mouse, do you know the way out of this pool? I am very tired
of swimming about here, O Mouse!รขยย (Alice thought this must be the right
way of speaking to a mouse: she had never done such a thing before, but
she remembered having seen in her brotherรขยยs Latin Grammar, รขยยA mouseรขยยof
a mouseรขยยto a mouseรขยยa mouseรขยยO mouse!รขยย) The Mouse looked at her rather
inquisitively, and seemed to her to wink with one of its little eyes,
but it said nothing.
รขยยPerhaps it doesnรขยยt understand English,รขยย thought Alice; รขยยI daresay itรขยยs
a French mouse, come over with William the Conqueror.รขยย (For, with all
her knowledge of history, Alice had no very clear notion how long ago
anything had happened.) So she began again: รขยยOรยน est ma chatte?รขยย which
was the first sentence in her French lesson-book. The Mouse gave a
sudden leap out of the water, and seemed to quiver all over with
fright. รขยยOh, I beg your pardon!รขยย cried Alice hastily, afraid that she
had hurt the poor animalรขยยs feelings. รขยยI quite forgot you didnรขยยt like
cats.รขยย
รขยยNot like cats!รขยย cried the Mouse, in a shrill, passionate voice. รขยยWould
_you_ like cats if you were me?รขยย
รขยยWell, perhaps not,รขยย said Alice in a soothing tone: รขยยdonรขยยt be angry
about it. And yet I wish I could show you our cat Dinah: I think youรขยยd
take a fancy to cats if you could only see her. She is such a dear
quiet thing,รขยย Alice went on, half to herself, as she swam lazily about
in the pool, รขยยand she sits purring so nicely by the fire, licking her
paws and washing her faceรขยยand she is such a nice soft thing to
nurseรขยยand sheรขยยs such a capital one for catching miceรขยยoh, I beg your
pardon!รขยย cried Alice again, for this time the Mouse was bristling all
over, and she felt certain it must be really offended. รขยยWe wonรขยยt talk
about her any more if youรขยยd rather not.รขยย
รขยยWe indeed!รขยย cried the Mouse, who was trembling down to the end of his
tail. รขยยAs if _I_ would talk on such a subject! Our family always
_hated_ cats: nasty, low, vulgar things! Donรขยยt let me hear the name
again!รขยย
รขยยI wonรขยยt indeed!รขยย said Alice, in a great hurry to change the subject of
conversation. รขยยAre youรขยยare you fondรขยยofรขยยof dogs?รขยย The Mouse did not
answer, so Alice went on eagerly: รขยยThere is such a nice little dog near
our house I should like to show you! A little bright-eyed terrier, you
know, with oh, such long curly brown hair! And itรขยยll fetch things when
you throw them, and itรขยยll sit up and beg for its dinner, and all sorts
of thingsรขยยI canรขยยt remember half of themรขยยand it belongs to a farmer, you
know, and he says itรขยยs so useful, itรขยยs worth a hundred pounds! He says
it kills all the rats andรขยยoh dear!รขยย cried Alice in a sorrowful tone,
รขยยIรขยยm afraid Iรขยยve offended it again!รขยย For the Mouse was swimming away
from her as hard as it could go, and making quite a commotion in the
pool as it went.
So she called softly after it, รขยยMouse dear! Do come back again, and we
wonรขยยt talk about cats or dogs either, if you donรขยยt like them!รขยย When the
Mouse heard this, it turned round and swam slowly back to her: its face
was quite pale (with passion, Alice thought), and it said in a low
trembling voice, รขยยLet us get to the shore, and then Iรขยยll tell you my
history, and youรขยยll understand why it is I hate cats and dogs.รขยย
It was high time to go, for the pool was getting quite crowded with the
birds and animals that had fallen into it: there were a Duck and a
Dodo, a Lory and an Eaglet, and several other curious creatures. Alice
led the way, and the whole party swam to the shore.
CHAPTER III.
A Caucus-Race and a Long Tale
They were indeed a queer-looking party that assembled on the bank--the
birds with draggled feathers, the animals with their fur clinging close
to them, and all dripping wet, cross, and uncomfortable.

The first question of course was, how to get dry again: they had a
consultation about this, and after a few minutes it seemed quite
natural to Alice to find herself talking familiarly with them, as if
she had known them all her life. Indeed, she had quite a long argument
with the Lory, who at last turned sulky, and would only say, "I am
older than you, and must know better;" and this Alice would not allow
without knowing how old it was, and, as the Lory positively refused to
tell its age, there was no more to be said.

At last the Mouse, who seemed to be a person of authority among them,
called out, "Sit down, all of you, and listen to me! _I'll_ soon make
you dry enough!" They all sat down at once, in a large ring, with the
Mouse in the middle. Alice kept her eyes anxiously fixed on it, for she
felt sure she would catch a bad cold if she did not get dry very soon.

"Ahem!" said the Mouse with an important air, "are you all ready? This
is the driest thing I know. Silence all round, if you please! 'William
the Conqueror, whose cause was favoured by the pope, was soon submitted
to by the English, who wanted leaders, and had been of late much
accustomed to usurpation and conquest. Edwin and Morcar, the earls of
Mercia and Northumbria--'"

"Ugh!" said the Lory, with a shiver.

"I beg your pardon!" said the Mouse, frowning, but very politely: "Did
you speak?"

"Not I!" said the Lory hastily.

"I thought you did," said the Mouse. "--I proceed. 'Edwin and Morcar,
the earls of Mercia and Northumbria, declared for him: and even
Stigand, the patriotic archbishop of Canterbury, found it advisable--'"

"Found _what_?" said the Duck.

"Found _it_," the Mouse replied rather crossly: "of course you know
what 'it' means."

"I know what 'it' means well enough, when _I_ find a thing," said the
Duck: "it's generally a frog or a worm. The question is, what did the
archbishop find?"

The Mouse did not notice this question, but hurriedly went on, "'--found
it advisable to go with Edgar Atheling to meet William and offer him
the crown. William's conduct at first was moderate. But the insolence
of his Normans--' How are you getting on now, my dear?" it continued,
turning to Alice as it spoke.

"As wet as ever," said Alice in a melancholy tone: "it doesn't seem to
dry me at all."

"In that case," said the Dodo solemnly, rising to its feet, "I move
that the meeting adjourn, for the immediate adoption of more energetic
remedies--"

"Speak English!" said the Eaglet. "I don't know the meaning of half
those long words, and, what's more, I don't believe you do either!" And
the Eaglet bent down its head to hide a smile: some of the other birds
tittered audibly.

"What I was going to say," said the Dodo in an offended tone, "was,
that the best thing to get us dry would be a Caucus-race."

"What _is_ a Caucus-race?" said Alice; not that she wanted much to
know, but the Dodo had paused as if it thought that _somebody_ ought to
speak, and no one else seemed inclined to say anything.

"Why," said the Dodo, "the best way to explain it is to do it." (And,
as you might like to try the thing yourself, some winter day, I will
tell you how the Dodo managed it.)

First it marked out a race-course, in a sort of circle, ("the exact
shape doesn't matter," it said,) and then all the party were placed
along the course, here and there. There was no "One, two, three, and
away," but they began running when they liked, and left off when they
liked, so that it was not easy to know when the race was over. However,
when they had been running half an hour or so, and were quite dry
again, the Dodo suddenly called out "The race is over!" and they all
crowded round it, panting, and asking, "But who has won?"

This question the Dodo could not answer without a great deal of
thought, and it sat for a long time with one finger pressed upon its
forehead (the position in which you usually see Shakespeare, in the
pictures of him), while the rest waited in silence. At last the Dodo
said, "_Everybody_ has won, and all must have prizes."

"But who is to give the prizes?" quite a chorus of voices asked.

"Why, _she_, of course," said the Dodo, pointing to Alice with one
finger; and the whole party at once crowded round her, calling out in a
confused way, "Prizes! Prizes!"

Alice had no idea what to do, and in despair she put her hand in her
pocket, and pulled out a box of comfits, (luckily the salt water had
not got into it), and handed them round as prizes. There was exactly
one a-piece, all round.

"But she must have a prize herself, you know," said the Mouse.

"Of course," the Dodo replied very gravely. "What else have you got in
your pocket?" he went on, turning to Alice.

"Only a thimble," said Alice sadly.

"Hand it over here," said the Dodo.

Then they all crowded round her once more, while the Dodo solemnly
presented the thimble, saying "We beg your acceptance of this elegant
thimble;" and, when it had finished this short speech, they all
cheered.

Alice thought the whole thing very absurd, but they all looked so grave
that she did not dare to laugh; and, as she could not think of anything
to say, she simply bowed, and took the thimble, looking as solemn as
she could.

The next thing was to eat the comfits: this caused some noise and
confusion, as the large birds complained that they could not taste
theirs, and the small ones choked and had to be patted on the back.
However, it was over at last, and they sat down again in a ring, and
begged the Mouse to tell them something more.

"You promised to tell me your history, you know," said Alice, "and why
it is you hate--C and D," she added in a whisper, half afraid that it
would be offended again.

"Mine is a long and a sad tale!" said the Mouse, turning to Alice, and
sighing.

"It _is_ a long tail, certainly," said Alice, looking down with wonder
at the Mouse's tail; "but why do you call it sad?" And she kept on
puzzling about it while the Mouse was speaking, so that her idea of the
tale was something like this:--

"Fury said to a mouse, That he met in the house, 'Let us both
go to law: _I_ will prosecute _you_.--Come, I'll take no
denial; We must have a trial: For really this morning I've
nothing to do.' Said the mouse to the cur, 'Such a trial, dear
sir, With no jury or judge, would be wasting our breath.'
'I'll be judge, I'll be jury,' Said cunning old Fury: 'I'll
try the whole cause, and condemn you to death.'"

"You are not attending!" said the Mouse to Alice severely. "What are
you thinking of?"

"I beg your pardon," said Alice very humbly: "you had got to the fifth
bend, I think?"

"I had _not!_" cried the Mouse, sharply and very angrily.

"A knot!" said Alice, always ready to make herself useful, and looking
anxiously about her. "Oh, do let me help to undo it!"

"I shall do nothing of the sort," said the Mouse, getting up and
walking away. "You insult me by talking such nonsense!"

"I didn't mean it!" pleaded poor Alice. "But you're so easily offended,
you know!"
The Mouse only growled in reply.

"Please come back and finish your story!" Alice called after it; and
the others all joined in chorus, "Yes, please do!" but the Mouse only
shook its head impatiently, and walked a little quicker.

"What a pity it wouldn't stay!" sighed the Lory, as soon as it was
quite out of sight; and an old Crab took the opportunity of saying to
her daughter "Ah, my dear! Let this be a lesson to you never to lose
_your_ temper!" "Hold your tongue, Ma!" said the young Crab, a little
snappishly. "You're enough to try the patience of an oyster!"

"I wish I had our Dinah here, I know I do!" said Alice aloud,
addressing nobody in particular. "She'd soon fetch it back!"

"And who is Dinah, if I might venture to ask the question?" said the
Lory.

Alice replied eagerly, for she was always ready to talk about her pet:
"Dinah's our cat. And she's such a capital one for catching mice you
can't think! And oh, I wish you could see her after the birds! Why,
she'll eat a little bird as soon as look at it!"

This speech caused a remarkable sensation among the party. Some of the
birds hurried off at once: one old Magpie began wrapping itself up very
carefully, remarking, "I really must be getting home; the night-air
doesn't suit my throat!" and a Canary called out in a trembling voice
to its children, "Come away, my dears! It's high time you were all in
bed!" On various pretexts they all moved off, and Alice was soon left
alone.

"I wish I hadn't mentioned Dinah!" she said to herself in a melancholy
tone. "Nobody seems to like her, down here, and I'm sure she's the best
cat in the world! Oh, my dear Dinah! I wonder if I shall ever see you
any more!" And here poor Alice began to cry again, for she felt very
lonely and low-spirited. In a little while, however, she again heard a
little pattering of footsteps in the distance, and she looked up
eagerly, half hoping that the Mouse had changed his mind, and was
coming back to finish his story.
CHAPTER IV.
The Rabbit Sends in a Little Bill
It was the White Rabbit, trotting slowly back again, and looking
anxiously about as it went, as if it had lost something; and she heard
it muttering to itself "The Duchess! The Duchess! Oh my dear paws! Oh
my fur and whiskers! She'll get me executed, as sure as ferrets are
ferrets! Where _can_ I have dropped them, I wonder?" Alice guessed in a
moment that it was looking for the fan and the pair of white kid
gloves, and she very good-naturedly began hunting about for them, but
they were nowhere to be seen--everything seemed to have changed since
her swim in the pool, and the great hall, with the glass table and the
little door, had vanished completely.

Very soon the Rabbit noticed Alice, as she went hunting about, and
called out to her in an angry tone, "Why, Mary Ann, what _are_ you
doing out here? Run home this moment, and fetch me a pair of gloves and
a fan! Quick, now!" And Alice was so much frightened that she ran off
at once in the direction it pointed to, without trying to explain the
mistake it had made.

"He took me for his housemaid," she said to herself as she ran. "How
surprised he'll be when he finds out who I am! But I'd better take him
his fan and gloves--that is, if I can find them." As she said this, she
came upon a neat little house, on the door of which was a bright brass
plate with the name "W. RABBIT," engraved upon it. She went in without
knocking, and hurried upstairs, in great fear lest she should meet the
real Mary Ann, and be turned out of the house before she had found the
fan and gloves.

"How queer it seems," Alice said to herself, "to be going messages for
a rabbit! I suppose Dinah'll be sending me on messages next!" And she
began fancying the sort of thing that would happen: "'Miss Alice! Come
here directly, and get ready for your walk!' 'Coming in a minute,
nurse! But I've got to see that the mouse doesn't get out.' Only I
don't think," Alice went on, "that they'd let Dinah stop in the house
if it began ordering people about like that!"

By this time she had found her way into a tidy little room with a table
in the window, and on it (as she had hoped) a fan and two or three
pairs of tiny white kid gloves: she took up the fan and a pair of the
gloves, and was just going to leave the room, when her eye fell upon a
little bottle that stood near the looking-glass. There was no label
this time with the words "DRINK ME," but nevertheless she uncorked it
and put it to her lips. "I know _something_ interesting is sure to
happen," she said to herself, "whenever I eat or drink anything; so
I'll just see what this bottle does. I do hope it'll make me grow large
again, for really I'm quite tired of being such a tiny little thing!"

It did so indeed, and much sooner than she had expected: before she had
drunk half the bottle, she found her head pressing against the ceiling,
and had to stoop to save her neck from being broken. She hastily put
down the bottle, saying to herself "That's quite enough--I hope I shan't
grow any more--As it is, I can't get out at the door--I do wish I hadn't
drunk quite so much!"

Alas! it was too late to wish that! She went on growing, and growing,
and very soon had to kneel down on the floor: in another minute there
was not even room for this, and she tried the effect of lying down with
one elbow against the door, and the other arm curled round her head.
Still she went on growing, and, as a last resource, she put one arm out
of the window, and one foot up the chimney, and said to herself "Now I
can do no more, whatever happens. What _will_ become of me?"

Luckily for Alice, the little magic bottle had now had its full effect,
and she grew no larger: still it was very uncomfortable, and, as there
seemed to be no sort of chance of her ever getting out of the room
again, no wonder she felt unhappy.

"It was much pleasanter at home," thought poor Alice, "when one wasn't
always growing larger and smaller, and being ordered about by mice and
rabbits. I almost wish I hadn't gone down that rabbit-hole--and yet--and
yet--it's rather curious, you know, this sort of life! I do wonder what
_can_ have happened to me! When I used to read fairy-tales, I fancied
that kind of thing never happened, and now here I am in the middle of
one! There ought to be a book written about me, that there ought! And
when I grow up, I'll write one--but I'm grown up now," she added in a
sorrowful tone; "at least there's no room to grow up any more _here_."

"But then," thought Alice, "shall I _never_ get any older than I am
now? That'll be a comfort, one way--never to be an old woman--but
then--always to have lessons to learn! Oh, I shouldn't like _that!_"

"Oh, you foolish Alice!" she answered herself. "How can you learn
lessons in here? Why, there's hardly room for _you_, and no room at all
for any lesson-books!"

And so she went on, taking first one side and then the other, and
making quite a conversation of it altogether; but after a few minutes
she heard a voice outside, and stopped to listen.

"Mary Ann! Mary Ann!" said the voice. "Fetch me my gloves this moment!"
Then came a little pattering of feet on the stairs. Alice knew it was
the Rabbit coming to look for her, and she trembled till she shook the
house, quite forgetting that she was now about a thousand times as
large as the Rabbit, and had no reason to be afraid of it.

Presently the Rabbit came up to the door, and tried to open it; but, as
the door opened inwards, and Alice's elbow was pressed hard against it,
that attempt proved a failure. Alice heard it say to itself "Then I'll
go round and get in at the window."

"_That_ you won't!" thought Alice, and, after waiting till she fancied
she heard the Rabbit just under the window, she suddenly spread out her
hand, and made a snatch in the air. She did not get hold of anything,
but she heard a little shriek and a fall, and a crash of broken glass,
from which she concluded that it was just possible it had fallen into a
cucumber-frame, or something of the sort.
Next came an angry voice--the Rabbit's--"Pat! Pat! Where are you?" And
then a voice she had never heard before, "Sure then I'm here! Digging
for apples, yer honour!"

"Digging for apples, indeed!" said the Rabbit angrily. "Here! Come and
help me out of _this!_" (Sounds of more broken glass.)

"Now tell me, Pat, what's that in the window?"

"Sure, it's an arm, yer honour!" (He pronounced it "arrum.")

"An arm, you goose! Who ever saw one that size? Why, it fills the whole
window!"

"Sure, it does, yer honour: but it's an arm for all that."

"Well, it's got no business there, at any rate: go and take it away!"

There was a long silence after this, and Alice could only hear whispers
now and then; such as, "Sure, I don't like it, yer honour, at all, at
all!" "Do as I tell you, you coward!" and at last she spread out her
hand again, and made another snatch in the air. This time there were
_two_ little shrieks, and more sounds of broken glass. "What a number
of cucumber-frames there must be!" thought Alice. "I wonder what
they'll do next! As for pulling me out of the window, I only wish they
_could!_ I'm sure _I_ don't want to stay in here any longer!"

She waited for some time without hearing anything more: at last came a
rumbling of little cartwheels, and the sound of a good many voices all
talking together: she made out the words: "Where's the other
ladder?--Why, I hadn't to bring but one; Bill's got the other--Bill!
fetch it here, lad!--Here, put 'em up at this corner--No, tie 'em
together first--they don't reach half high enough yet--Oh! they'll do
well enough; don't be particular--Here, Bill! catch hold of this
rope--Will the roof bear?--Mind that loose slate--Oh, it's coming down!
Heads below!" (a loud crash)--"Now, who did that?--It was Bill, I
fancy--Who's to go down the chimney?--Nay, _I_ shan't! _You_ do
it!--_That_ I won't, then!--Bill's to go down--Here, Bill! the master says
you're to go down the chimney!"

"Oh! So Bill's got to come down the chimney, has he?" said Alice to
herself. "Shy, they seem to put everything upon Bill! I wouldn't be in
Bill's place for a good deal: this fireplace is narrow, to be sure; but
I _think_ I can kick a little!"

She drew her foot as far down the chimney as she could, and waited till
she heard a little animal (she couldn't guess of what sort it was)
scratching and scrambling about in the chimney close above her: then,
saying to herself "This is Bill," she gave one sharp kick, and waited
to see what would happen next.

The first thing she heard was a general chorus of "There goes Bill!"
then the Rabbit's voice along--"Catch him, you by the hedge!" then
silence, and then another confusion of voices--"Hold up his head--Brandy
now--Don't choke him--How was it, old fellow? What happened to you? Tell
us all about it!"

Last came a little feeble, squeaking voice, ("That's Bill," thought
Alice,) "Well, I hardly know--No more, thank ye; I'm better now--but I'm
a deal too flustered to tell you--all I know is, something comes at me
like a Jack-in-the-box, and up I goes like a sky-rocket!"

"So you did, old fellow!" said the others.

"We must burn the house down!" said the Rabbit's voice; and Alice
called out as loud as she could, "If you do, I'll set Dinah at you!"

There was a dead silence instantly, and Alice thought to herself, "I
wonder what they _will_ do next! If they had any sense, they'd take the
roof off." After a minute or two, they began moving about again, and
Alice heard the Rabbit say, "A barrowful will do, to begin with."

"A barrowful of _what?_" thought Alice; but she had not long to doubt,
for the next moment a shower of little pebbles came rattling in at the
window, and some of them hit her in the face. "I'll put a stop to
this," she said to herself, and shouted out, "You'd better not do that
again!" which produced another dead silence.

Alice noticed with some surprise that the pebbles were all turning into
little cakes as they lay on the floor, and a bright idea came into her
head. "If I eat one of these cakes," she thought, "it's sure to make
_some_ change in my size; and as it can't possibly make me larger, it
must make me smaller, I suppose."

So she swallowed one of the cakes, and was delighted to find that she
began shrinking directly. As soon as she was small enough to get
through the door, she ran out of the house, and found quite a crowd of
little animals and birds waiting outside. The poor little Lizard, Bill,
was in the middle, being held up by two guinea-pigs, who were giving it
something out of a bottle. They all made a rush at Alice the moment she
appeared; but she ran off as hard as she could, and soon found herself
safe in a thick wood.

"The first thing I've got to do," said Alice to herself, as she
wandered about in the wood, "is to grow to my right size again; and the
second thing is to find my way into that lovely garden. I think that
will be the best plan."
It sounded an excellent plan, no doubt, and very neatly and simply
arranged; the only difficulty was, that she had not the smallest idea
how to set about it; and while she was peering about anxiously among
the trees, a little sharp bark just over her head made her look up in a
great hurry.
An enormous puppy was looking down at her with large round eyes, and
feebly stretching out one paw, trying to touch her. "Poor little
thing!" said Alice, in a coaxing tone, and she tried hard to whistle to
it; but she was terribly frightened all the time at the thought that it
might be hungry, in which case it would be very likely to eat her up in
spite of all her coaxing.
Hardly knowing what she did, she picked up a little bit of stick, and
held it out to the puppy; whereupon the puppy jumped into the air off
all its feet at once, with a yelp of delight, and rushed at the stick,
and made believe to worry it; then Alice dodged behind a great thistle,
to keep herself from being run over; and the moment she appeared on the
other side, the puppy made another rush at the stick, and tumbled head
over heels in its hurry to get hold of it; then Alice, thinking it was
very like having a game of play with a cart-horse, and expecting every
moment to be trampled under its feet, ran round the thistle again; then
the puppy began a series of short charges at the stick, running a very
little way forwards each time and a long way back, and barking hoarsely
all the while, till at last it sat down a good way off, panting, with
its tongue hanging out of its mouth, and its great eyes half shut.
This seemed to Alice a good opportunity for making her escape; so she
set off at once, and ran till she was quite tired and out of breath,
and till the puppy's bark sounded quite faint in the distance.

"And yet what a dear little puppy it was!" said Alice, as she leant
against a buttercup to rest herself, and fanned herself with one of the
leaves: "I should have liked teaching it tricks very much, if--if I'd
only been the right size to do it! Oh dear! I'd nearly forgotten that
I've got to grow up again! Let me see--how _is_ it to be managed? I
suppose I ought to eat or drink something or other; but the great
question is, what?"
The great question certainly was, what? Alice looked all round her at
the flowers and the blades of grass, but she did not see anything that
looked like the right thing to eat or drink under the circumstances.
There was a large mushroom growing near her, about the same height as
herself; and when she had looked under it, and on both sides of it, and
behind it, it occurred to her that she might as well look and see what
was on the top of it.
She stretched herself up on tiptoe, and peeped over the edge of the
mushroom, and her eyes immediately met those of a large blue
caterpillar, that was sitting on the top with its arms folded, quietly
smoking a long hookah, and taking not the smallest notice of her or of
anything else.
CHAPTER V.
Advice from a Caterpillar
The Caterpillar and Alice looked at each other for some time in
silence: at last the Caterpillar took the hookah out of its mouth, and
addressed her in a languid, sleepy voice.

"Who are _you?_" said the Caterpillar.

This was not an encouraging opening for a conversation. Alice replied,
rather shyly, "I--I hardly know, sir, just at present--at least I know
who I _was_ when I got up this morning, but I think I must have been
changed several times since then."

"What do you mean by that?" said the Caterpillar sternly. "Explain
yourself!"

"I can't explain _myself_, I'm afraid, sir," said Alice, "because I'm
not myself, you see."

"I don't see," said the Caterpillar.

"I'm afraid I can't put it more clearly," Alice replied very politely,
"for I can't understand it myself to begin with; and being so many
different sizes in a day is very confusing."

"It isn't," said the Caterpillar.

"Well, perhaps you haven't found it so yet," said Alice; "but when you
have to turn into a chrysalis--you will some day, you know--and then
after that into a butterfly, I should think you'll feel it a little
queer, won't you?"

"Not a bit," said the Caterpillar.

"Well, perhaps your feelings may be different," said Alice; "all I know
is, it would feel very queer to _me_."

"You!" said the Caterpillar contemptuously. "Who are _you?_"

Which brought them back again to the beginning of the conversation.
Alice felt a little irritated at the Caterpillar's making such _very_
short remarks, and she drew herself up and said, very gravely, "I
think, you ought to tell me who _you_ are, first."

"Why?" said the Caterpillar.

Here was another puzzling question; and as Alice could not think of any
good reason, and as the Caterpillar seemed to be in a _very_ unpleasant
state of mind, she turned away.

"Come back!" the Caterpillar called after her. "I've something
important to say!"

This sounded promising, certainly: Alice turned and came back again.

"Keep your temper," said the Caterpillar.

"Is that all?" said Alice, swallowing down her anger as well as she
could.

"No," said the Caterpillar.

Alice thought she might as well wait, as she had nothing else to do,
and perhaps after all it might tell her something worth hearing. For
some minutes it puffed away without speaking, but at last it unfolded
its arms, took the hookah out of its mouth again, and said, "So you
think you're changed, do you?"

"I'm afraid I am, sir," said Alice; "I can't remember things as I
used--and I don't keep the same size for ten minutes together!"

"Can't remember _what_ things?" said the Caterpillar.

"Well, I've tried to say 'How doth the little busy bee,' but it all
came different!" Alice replied in a very melancholy voice.

"Repeat, '_You are old, Father William_,'" said the Caterpillar.

Alice folded her hands, and began:--

"You are old, Father William," the young man said,
"And your hair has become very white;
And yet you incessantly stand on your head--
Do you think, at your age, it is right?"

"In my youth," Father William replied to his son,
"I feared it might injure the brain;
But, now that I'm perfectly sure I have none,
Why, I do it again and again."

"You are old," said the youth, "as I mentioned before,
And have grown most uncommonly fat;
Yet you turned a back-somersault in at the door--
Pray, what is the reason of that?"

"In my youth," said the sage, as he shook his grey locks,
"I kept all my limbs very supple
By the use of this ointment--one shilling the box--
Allow me to sell you a couple?"

"You are old," said the youth, "and your jaws are too weak
For anything tougher than suet;
Yet you finished the goose, with the bones and the beak--
Pray, how did you manage to do it?"

"In my youth," said his father, "I took to the law,
And argued each case with my wife;
And the muscular strength, which it gave to my jaw,
Has lasted the rest of my life."

"You are old," said the youth, "one would hardly suppose
That your eye was as steady as ever;
Yet you balanced an eel on the end of your nose--
What made you so awfully clever?"

"I have answered three questions, and that is enough,"
Said his father; "don't give yourself airs!
Do you think I can listen all day to such stuff?
Be off, or I'll kick you down stairs!"

"That is not said right," said the Caterpillar.

"Not _quite_ right, I'm afraid," said Alice, timidly; "some of the
words have got altered."

"It is wrong from beginning to end," said the Caterpillar decidedly,
and there was silence for some minutes.
The Caterpillar was the first to speak.

"What size do you want to be?" it asked.

"Oh, I'm not particular as to size," Alice hastily replied; "only one
doesn't like changing so often, you know."

"I _don't_ know," said the Caterpillar.

Alice said nothing: she had never been so much contradicted in her life
before, and she felt that she was losing her temper.

"Are you content now?" said the Caterpillar.

"Well, I should like to be a _little_ larger, sir, if you wouldn't
mind," said Alice: "three inches is such a wretched height to be."

"It is a very good height indeed!" said the Caterpillar angrily,
rearing itself upright as it spoke (it was exactly three inches high).

"But I'm not used to it!" pleaded poor Alice in a piteous tone. And she
thought of herself, "I wish the creatures wouldn't be so easily
offended!"

"You'll get used to it in time," said the Caterpillar; and it put the
hookah into its mouth and began smoking again.

This time Alice waited patiently until it chose to speak again. In a
minute or two the Caterpillar took the hookah out of its mouth and
yawned once or twice, and shook itself. Then it got down off the
mushroom, and crawled away in the grass, merely remarking as it went,
"One side will make you grow taller, and the other side will make you
grow shorter."

"One side of _what?_ The other side of _what?_" thought Alice to
herself.

"Of the mushroom," said the Caterpillar, just as if she had asked it
aloud; and in another moment it was out of sight.

Alice remained looking thoughtfully at the mushroom for a minute,
trying to make out which were the two sides of it; and as it was
perfectly round, she found this a very difficult question. However, at
last she stretched her arms round it as far as they would go, and broke
off a bit of the edge with each hand.

"And now which is which?" she said to herself, and nibbled a little of
the right-hand bit to try the effect: the next moment she felt a
violent blow underneath her chin: it had struck her foot!

She was a good deal frightened by this very sudden change, but she felt
that there was no time to be lost, as she was shrinking rapidly; so she
set to work at once to eat some of the other bit. Her chin was pressed
so closely against her foot, that there was hardly room to open her
mouth; but she did it at last, and managed to swallow a morsel of the
lefthand bit.

* * * * * * *

* * * * * *

* * * * * * *

"Come, my head's free at last!" said Alice in a tone of delight, which
changed into alarm in another moment, when she found that her shoulders
were nowhere to be found: all she could see, when she looked down, was
an immense length of neck, which seemed to rise like a stalk out of a
sea of green leaves that lay far below her.

"What _can_ all that green stuff be?" said Alice. "And where _have_ my
shoulders got to? And oh, my poor hands, how is it I can't see you?"

She was moving them about as she spoke, but no result seemed to follow,
except a little shaking among the distant green leaves.

As there seemed to be no chance of getting her hands up to her head,
she tried to get her head down to them, and was delighted to find that
her neck would bend about easily in any direction, like a serpent. She
had just succeeded in curving it down into a graceful zigzag, and was
going to dive in among the leaves, which she found to be nothing but
the tops of the trees under which she had been wandering, when a sharp
hiss made her draw back in a hurry: a large pigeon had flown into her
face, and was beating her violently with its wings.

"Serpent!" screamed the Pigeon.

"I'm _not_ a serpent!" said Alice indignantly. "Let me alone!"

"Serpent, I say again!" repeated the Pigeon, but in a more subdued
tone, and added with a kind of sob, "I've tried every way, and nothing
seems to suit them!"

"I haven't the least idea what you're talking about," said Alice.

"I've tried the roots of trees, and I've tried banks, and I've tried
hedges," the Pigeon went on, without attending to her; "but those
serpents! There's no pleasing them!"
Alice was more and more puzzled, but she thought there was no use in
saying anything more till the Pigeon had finished.

"As if it wasn't trouble enough hatching the eggs," said the Pigeon;
"but I must be on the look-out for serpents night and day! Why, I
haven't had a wink of sleep these three weeks!"

"I'm very sorry you've been annoyed," said Alice, who was beginning to
see its meaning.

"And just as I'd taken the highest tree in the wood," continued the
Pigeon, raising its voice to a shriek, "and just as I was thinking I
should be free of them at last, they must needs come wriggling down
from the sky! Ugh, Serpent!"

"But I'm _not_ a serpent, I tell you!" said Alice. "I'm a--I'm a--"

"Well! _What_ are you?" said the Pigeon. "I can see you're trying to
invent something!"

"I--I'm a little girl," said Alice, rather doubtfully, as she remembered
the number of changes she had gone through that day.

"A likely story indeed!" said the Pigeon in a tone of the deepest
contempt. "I've seen a good many little girls in my time, but never
_one_ with such a neck as that! No, no! You're a serpent; and there's
no use denying it. I suppose you'll be telling me next that you never
tasted an egg!"

"I _have_ tasted eggs, certainly," said Alice, who was a very truthful
child; "but little girls eat eggs quite as much as serpents do, you
know."

"I don't believe it," said the Pigeon; "but if they do, why then
they're a kind of serpent, that's all I can say."

This was such a new idea to Alice, that she was quite silent for a
minute or two, which gave the Pigeon the opportunity of adding, "You're
looking for eggs, I know _that_ well enough; and what does it matter to
me whether you're a little girl or a serpent?"

"It matters a good deal to _me_," said Alice hastily; "but I'm not
looking for eggs, as it happens; and if I was, I shouldn't want
_yours_: I don't like them raw."

"Well, be off, then!" said the Pigeon in a sulky tone, as it settled
down again into its nest. Alice crouched down among the trees as well
as she could, for her neck kept getting entangled among the branches,
and every now and then she had to stop and untwist it. After a while
she remembered that she still held the pieces of mushroom in her hands,
and she set to work very carefully, nibbling first at one and then at
the other, and growing sometimes taller and sometimes shorter, until
she had succeeded in bringing herself down to her usual height.

It was so long since she had been anything near the right size, that it
felt quite strange at first; but she got used to it in a few minutes,
and began talking to herself, as usual. "Come, there's half my plan
done now! How puzzling all these changes are! I'm never sure what I'm
going to be, from one minute to another! However, I've got back to my
right size: the next thing is, to get into that beautiful garden--how
_is_ that to be done, I wonder?" As she said this, she came suddenly
upon an open place, with a little house in it about four feet high.
"Whoever lives there," thought Alice, "it'll never do to come upon them
_this_ size: why, I should frighten them out of their wits!" So she
began nibbling at the righthand bit again, and did not venture to go
near the house till she had brought herself down to nine inches high.
CHAPTER VI.
Pig and Pepper
For a minute or two she stood looking at the house, and wondering what
to do next, when suddenly a footman in livery came running out of the
wood--(she considered him to be a footman because he was in livery:
otherwise, judging by his face only, she would have called him a
fish)--and rapped loudly at the door with his knuckles. It was opened by
another footman in livery, with a round face, and large eyes like a
frog; and both footmen, Alice noticed, had powdered hair that curled
all over their heads. She felt very curious to know what it was all
about, and crept a little way out of the wood to listen.

The Fish-Footman began by producing from under his arm a great letter,
nearly as large as himself, and this he handed over to the other,
saying, in a solemn tone, "For the Duchess. An invitation from the
Queen to play croquet." The Frog-Footman repeated, in the same solemn
tone, only changing the order of the words a little, "From the Queen.
An invitation for the Duchess to play croquet."

Then they both bowed low, and their curls got entangled together.

Alice laughed so much at this, that she had to run back into the wood
for fear of their hearing her; and when she next peeped out the
Fish-Footman was gone, and the other was sitting on the ground near the
door, staring stupidly up into the sky.

Alice went timidly up to the door, and knocked.

"There's no sort of use in knocking," said the Footman, "and that for
two reasons. First, because I'm on the same side of the door as you
are; secondly, because they're making such a noise inside, no one could
possibly hear you." And certainly there _was_ a most extraordinary
noise going on within--a constant howling and sneezing, and every now
and then a great crash, as if a dish or kettle had been broken to
pieces.

"Please, then," said Alice, "how am I to get in?"

"There might be some sense in your knocking," the Footman went on
without attending to her, "if we had the door between us. For instance,
if you were _inside_, you might knock, and I could let you out, you
know." He was looking up into the sky all the time he was speaking, and
this Alice thought decidedly uncivil. "But perhaps he can't help it,"
she said to herself; "his eyes are so _very_ nearly at the top of his
head. But at any rate he might answer questions.--How am I to get in?"
she repeated, aloud.

"I shall sit here," the Footman remarked, "till tomorrow--"

At this moment the door of the house opened, and a large plate came
skimming out, straight at the Footman's head: it just grazed his nose,
and broke to pieces against one of the trees behind him.

"--or next day, maybe," the Footman continued in the same tone, exactly
as if nothing had happened.

"How am I to get in?" asked Alice again, in a louder tone.

"_Are_ you to get in at all?" said the Footman. "That's the first
question, you know."

It was, no doubt: only Alice did not like to be told so. "It's really
dreadful," she muttered to herself, "the way all the creatures argue.
It's enough to drive one crazy!"

The Footman seemed to think this a good opportunity for repeating his
remark, with variations. "I shall sit here," he said, "on and off, for
days and days."

"But what am _I_ to do?" said Alice.

"Anything you like," said the Footman, and began whistling.

"Oh, there's no use in talking to him," said Alice desperately: "he's
perfectly idiotic!" And she opened the door and went in.
The door led right into a large kitchen, which was full of smoke from
one end to the other: the Duchess was sitting on a three-legged stool
in the middle, nursing a baby; the cook was leaning over the fire,
stirring a large cauldron which seemed to be full of soup.

"There's certainly too much pepper in that soup!" Alice said to
herself, as well as she could for sneezing.

There was certainly too much of it in the air. Even the Duchess sneezed
occasionally; and as for the baby, it was sneezing and howling
alternately without a moment's pause. The only things in the kitchen
that did not sneeze, were the cook, and a large cat which was sitting
on the hearth and grinning from ear to ear.

"Please would you tell me," said Alice, a little timidly, for she was
not quite sure whether it was good manners for her to speak first, "why
your cat grins like that?"

"It's a Cheshire cat," said the Duchess, "and that's why. Pig!"

She said the last word with such sudden violence that Alice quite
jumped; but she saw in another moment that it was addressed to the
baby, and not to her, so she took courage, and went on again:--

"I didn't know that Cheshire cats always grinned; in fact, I didn't
know that cats _could_ grin."

"They all can," said the Duchess; "and most of 'em do."

"I don't know of any that do," Alice said very politely, feeling quite
pleased to have got into a conversation.

"You don't know much," said the Duchess; "and that's a fact."

Alice did not at all like the tone of this remark, and thought it would
be as well to introduce some other subject of conversation. While she
was trying to fix on one, the cook took the cauldron of soup off the
fire, and at once set to work throwing everything within her reach at
the Duchess and the baby--the fire-irons came first; then followed a
shower of saucepans, plates, and dishes. The Duchess took no notice of
them even when they hit her; and the baby was howling so much already,
that it was quite impossible to say whether the blows hurt it or not.

"Oh, _please_ mind what you're doing!" cried Alice, jumping up and down
in an agony of terror. "Oh, there goes his _precious_ nose!" as an
unusually large saucepan flew close by it, and very nearly carried it
off.

"If everybody minded their own business," the Duchess said in a hoarse
growl, "the world would go round a deal faster than it does."

"Which would _not_ be an advantage," said Alice, who felt very glad to
get an opportunity of showing off a little of her knowledge. "Just
think of what work it would make with the day and night! You see the
earth takes twenty-four hours to turn round on its axis--"

"Talking of axes," said the Duchess, "chop off her head!"

Alice glanced rather anxiously at the cook, to see if she meant to take
the hint; but the cook was busily stirring the soup, and seemed not to
be listening, so she went on again: "Twenty-four hours, I _think_; or
is it twelve? I--"

"Oh, don't bother _me_," said the Duchess; "I never could abide
figures!" And with that she began nursing her child again, singing a
sort of lullaby to it as she did so, and giving it a violent shake at
the end of every line:

"Speak roughly to your little boy,
And beat him when he sneezes:
He only does it to annoy,
Because he knows it teases."

CHORUS.

(In which the cook and the baby joined):

"Wow! wow! wow!"

While the Duchess sang the second verse of the song, she kept tossing
the baby violently up and down, and the poor little thing howled so,
that Alice could hardly hear the words:--

"I speak severely to my boy,
I beat him when he sneezes;
For he can thoroughly enjoy
The pepper when he pleases!"

CHORUS.

"Wow! wow! wow!"

"Here! you may nurse it a bit, if you like!" the Duchess said to Alice,
flinging the baby at her as she spoke. "I must go and get ready to play
croquet with the Queen," and she hurried out of the room. The cook
threw a frying-pan after her as she went out, but it just missed her.

Alice caught the baby with some difficulty, as it was a queer-shaped
little creature, and held out its arms and legs in all directions,
"just like a star-fish," thought Alice. The poor little thing was
snorting like a steam-engine when she caught it, and kept doubling
itself up and straightening itself out again, so that altogether, for
the first minute or two, it was as much as she could do to hold it.
As soon as she had made out the proper way of nursing it, (which was to
twist it up into a sort of knot, and then keep tight hold of its right
ear and left foot, so as to prevent its undoing itself,) she carried it
out into the open air. "If I don't take this child away with me,"
thought Alice, "they're sure to kill it in a day or two: wouldn't it be
murder to leave it behind?" She said the last words out loud, and the
little thing grunted in reply (it had left off sneezing by this time).
"Don't grunt," said Alice; "that's not at all a proper way of
expressing yourself."

The baby grunted again, and Alice looked very anxiously into its face
to see what was the matter with it. There could be no doubt that it had
a _very_ turn-up nose, much more like a snout than a real nose; also
its eyes were getting extremely small for a baby: altogether Alice did
not like the look of the thing at all. "But perhaps it was only
sobbing," she thought, and looked into its eyes again, to see if there
were any tears.

No, there were no tears. "If you're going to turn into a pig, my dear,"
said Alice, seriously, "I'll have nothing more to do with you. Mind
now!" The poor little thing sobbed again (or grunted, it was impossible
to say which), and they went on for some while in silence.

Alice was just beginning to think to herself, "Now, what am I to do
with this creature when I get it home?" when it grunted again, so
violently, that she looked down into its face in some alarm. This time
there could be _no_ mistake about it: it was neither more nor less than
a pig, and she felt that it would be quite absurd for her to carry it
further.

So she set the little creature down, and felt quite relieved to see it
trot away quietly into the wood. "If it had grown up," she said to
herself, "it would have made a dreadfully ugly child: but it makes
rather a handsome pig, I think." And she began thinking over other
children she knew, who might do very well as pigs, and was just saying
to herself, "if one only knew the right way to change them--" when she
was a little startled by seeing the Cheshire Cat sitting on a bough of
a tree a few yards off.

The Cat only grinned when it saw Alice. It looked good-natured, she
thought: still it had _very_ long claws and a great many teeth, so she
felt that it ought to be treated with respect.

"Cheshire Puss," she began, rather timidly, as she did not at all know
whether it would like the name: however, it only grinned a little
wider. "Come, it's pleased so far," thought Alice, and she went on.
"Would you tell me, please, which way I ought to go from here?"

"That depends a good deal on where you want to get to," said the Cat.

"I don't much care where--" said Alice.

"Then it doesn't matter which way you go," said the Cat.

"--so long as I get _somewhere_," Alice added as an explanation.

"Oh, you're sure to do that," said the Cat, "if you only walk long
enough."

Alice felt that this could not be denied, so she tried another
question. "What sort of people live about here?"

"In _that_ direction," the Cat said, waving its right paw round, "lives
a Hatter: and in _that_ direction," waving the other paw, "lives a
March Hare. Visit either you like: they're both mad."

"But I don't want to go among mad people," Alice remarked.

"Oh, you can't help that," said the Cat: "we're all mad here. I'm mad.
You're mad."

"How do you know I'm mad?" said Alice.

"You must be," said the Cat, "or you wouldn't have come here."

Alice didn't think that proved it at all; however, she went on "And how
do you know that you're mad?"

"To begin with," said the Cat, "a dog's not mad. You grant that?"

"I suppose so," said Alice.

"Well, then," the Cat went on, "you see, a dog growls when it's angry,
and wags its tail when it's pleased. Now _I_ growl when I'm pleased,
and wag my tail when I'm angry. Therefore I'm mad."

"_I_ call it purring, not growling," said Alice.

"Call it what you like," said the Cat. "Do you play croquet with the
Queen to-day?"

"I should like it very much," said Alice, "but I haven't been invited
yet."

"You'll see me there," said the Cat, and vanished.
Alice was not much surprised at this, she was getting so used to queer
things happening. While she was looking at the place where it had been,
it suddenly appeared again.

"By-the-bye, what became of the baby?" said the Cat. "I'd nearly
forgotten to ask."

"It turned into a pig," Alice quietly said, just as if it had come back
in a natural way.

"I thought it would," said the Cat, and vanished again.

Alice waited a little, half expecting to see it again, but it did not
appear, and after a minute or two she walked on in the direction in
which the March Hare was said to live. "I've seen hatters before," she
said to herself; "the March Hare will be much the most interesting, and
perhaps as this is May it won't be raving mad--at least not so mad as it
was in March." As she said this, she looked up, and there was the Cat
again, sitting on a branch of a tree.

"Did you say pig, or fig?" said the Cat.

"I said pig," replied Alice; "and I wish you wouldn't keep appearing
and vanishing so suddenly: you make one quite giddy."

"All right," said the Cat; and this time it vanished quite slowly,
beginning with the end of the tail, and ending with the grin, which
remained some time after the rest of it had gone.

"Well! I've often seen a cat without a grin," thought Alice; "but a
grin without a cat! It's the most curious thing I ever saw in my life!"

She had not gone much farther before she came in sight of the house of
the March Hare: she thought it must be the right house, because the
chimneys were shaped like ears and the roof was thatched with fur. It
was so large a house, that she did not like to go nearer till she had
nibbled some more of the lefthand bit of mushroom, and raised herself
to about two feet high: even then she walked up towards it rather
timidly, saying to herself "Suppose it should be raving mad after all!
I almost wish I'd gone to see the Hatter instead!"
CHAPTER VII.
A Mad Tea-Party
There was a table set out under a tree in front of the house, and the
March Hare and the Hatter were having tea at it: a Dormouse was sitting
between them, fast asleep, and the other two were using it as a
cushion, resting their elbows on it, and talking over its head. "Very
uncomfortable for the Dormouse," thought Alice; "only, as it's asleep,
I suppose it doesn't mind."

The table was a large one, but the three were all crowded together at
one corner of it: "No room! No room!" they cried out when they saw
Alice coming. "There's _plenty_ of room!" said Alice indignantly, and
she sat down in a large arm-chair at one end of the table.

"Have some wine," the March Hare said in an encouraging tone.

Alice looked all round the table, but there was nothing on it but tea.
"I don't see any wine," she remarked.

"There isn't any," said the March Hare.

"Then it wasn't very civil of you to offer it," said Alice angrily.

"It wasn't very civil of you to sit down without being invited," said
the March Hare.

"I didn't know it was _your_ table," said Alice; "it's laid for a great
many more than three."

"Your hair wants cutting," said the Hatter. He had been looking at
Alice for some time with great curiosity, and this was his first
speech.

"You should learn not to make personal remarks," Alice said with some
severity; "it's very rude."

The Hatter opened his eyes very wide on hearing this; but all he _said_
was, "Why is a raven like a writing-desk?"

"Come, we shall have some fun now!" thought Alice. "I'm glad they've
begun asking riddles.--I believe I can guess that," she added aloud.

"Do you mean that you think you can find out the answer to it?" said
the March Hare.

"Exactly so," said Alice.

"Then you should say what you mean," the March Hare went on.

"I do," Alice hastily replied; "at least--at least I mean what I
say--that's the same thing, you know."

"Not the same thing a bit!" said the Hatter. "You might just as well
say that 'I see what I eat' is the same thing as 'I eat what I see'!"

"You might just as well say," added the March Hare, "that 'I like what
I get' is the same thing as 'I get what I like'!"

"You might just as well say," added the Dormouse, who seemed to be
talking in his sleep, "that 'I breathe when I sleep' is the same thing
as 'I sleep when I breathe'!"

"It _is_ the same thing with you," said the Hatter, and here the
conversation dropped, and the party sat silent for a minute, while
Alice thought over all she could remember about ravens and
writing-desks, which wasn't much.

The Hatter was the first to break the silence. "What day of the month
is it?" he said, turning to Alice: he had taken his watch out of his
pocket, and was looking at it uneasily, shaking it every now and then,
and holding it to his ear.

Alice considered a little, and then said "The fourth."

"Two days wrong!" sighed the Hatter. "I told you butter wouldn't suit
the works!" he added looking angrily at the March Hare.

"It was the _best_ butter," the March Hare meekly replied.

"Yes, but some crumbs must have got in as well," the Hatter grumbled:
"you shouldn't have put it in with the bread-knife."

The March Hare took the watch and looked at it gloomily: then he dipped
it into his cup of tea, and looked at it again: but he could think of
nothing better to say than his first remark, "It was the _best_ butter,
you know."

Alice had been looking over his shoulder with some curiosity. "What a
funny watch!" she remarked. "It tells the day of the month, and doesn't
tell what o'clock it is!"

"Why should it?" muttered the Hatter. "Does _your_ watch tell you what
year it is?"

"Of course not," Alice replied very readily: "but that's because it
stays the same year for such a long time together."

"Which is just the case with _mine_," said the Hatter.

Alice felt dreadfully puzzled. The Hatter's remark seemed to have no
sort of meaning in it, and yet it was certainly English. "I don't quite
understand you," she said, as politely as she could.

"The Dormouse is asleep again," said the Hatter, and he poured a little
hot tea upon its nose.

The Dormouse shook its head impatiently, and said, without opening its
eyes, "Of course, of course; just what I was going to remark myself."

"Have you guessed the riddle yet?" the Hatter said, turning to Alice
again.

"No, I give it up," Alice replied: "what's the answer?"

"I haven't the slightest idea," said the Hatter.

"Nor I," said the March Hare.

Alice sighed wearily. "I think you might do something better with the
time," she said, "than waste it in asking riddles that have no
answers."

"If you knew Time as well as I do," said the Hatter, "you wouldn't talk
about wasting _it_. It's _him_."

"I don't know what you mean," said Alice.

"Of course you don't!" the Hatter said, tossing his head
contemptuously. "I dare say you never even spoke to Time!"

"Perhaps not," Alice cautiously replied: "but I know I have to beat
time when I learn music."

"Ah! that accounts for it," said the Hatter. "He won't stand beating.
Now, if you only kept on good terms with him, he'd do almost anything
you liked with the clock. For instance, suppose it were nine o'clock in
the morning, just time to begin lessons: you'd only have to whisper a
hint to Time, and round goes the clock in a twinkling! Half-past one,
time for dinner!"

("I only wish it was," the March Hare said to itself in a whisper.)

"That would be grand, certainly," said Alice thoughtfully: "but then--I
shouldn't be hungry for it, you know."

"Not at first, perhaps," said the Hatter: "but you could keep it to
half-past one as long as you liked."

"Is that the way _you_ manage?" Alice asked.
The Hatter shook his head mournfully. รขยยNot I!รขยย he replied. รขยยWe
quarrelled last Marchรขยยjust before _he_ went mad, you knowรขยยรขยย (pointing
with his tea spoon at the March Hare,) รขยยรขยยit was at the great concert
given by the Queen of Hearts, and I had to sing
รขยยTwinkle, twinkle, little bat!
How I wonder what youรขยยre at!รขยย
You know the song, perhaps?รขยย
รขยยIรขยยve heard something like it,รขยย said Alice.
รขยยIt goes on, you know,รขยย the Hatter continued, รขยยin this way:รขยย
รขยยUp above the world you fly,
Like a tea-tray in the sky.
Twinkle, twinkleรขยยรขยยรขยย
Here the Dormouse shook itself, and began singing in its sleep
รขยย_Twinkle, twinkle, twinkle, twinkle_รขยยรขยย and went on so long that they
had to pinch it to make it stop.
รขยยWell, Iรขยยd hardly finished the first verse,รขยย said the Hatter, รขยยwhen the
Queen jumped up and bawled out, รขยยHeรขยยs murdering the time! Off with his
head!รขยยรขยย
รขยยHow dreadfully savage!รขยย exclaimed Alice.
รขยยAnd ever since that,รขยย the Hatter went on in a mournful tone, รขยยhe wonรขยยt
do a thing I ask! Itรขยยs always six oรขยยclock now.รขยย
A bright idea came into Aliceรขยยs head. รขยยIs that the reason so many
tea-things are put out here?รขยย she asked.
รขยยYes, thatรขยยs it,รขยย said the Hatter with a sigh: รขยยitรขยยs always tea-time,
and weรขยยve no time to wash the things between whiles.รขยย
รขยยThen you keep moving round, I suppose?รขยย said Alice.
รขยยExactly so,รขยย said the Hatter: รขยยas the things get used up.รขยย
รขยยBut what happens when you come to the beginning again?รขยย Alice ventured
to ask.
รขยยSuppose we change the subject,รขยย the March Hare interrupted, yawning.
รขยยIรขยยm getting tired of this. I vote the young lady tells us a story.รขยย
รขยยIรขยยm afraid I donรขยยt know one,รขยย said Alice, rather alarmed at the
proposal.
รขยยThen the Dormouse shall!รขยย they both cried. รขยยWake up, Dormouse!รขยย And
they pinched it on both sides at once.
The Dormouse slowly opened his eyes. รขยยI wasnรขยยt asleep,รขยย he said in a
hoarse, feeble voice: รขยยI heard every word you fellows were saying.รขยย
รขยยTell us a story!รขยย said the March Hare.
รขยยYes, please do!รขยย pleaded Alice.
รขยยAnd be quick about it,รขยย added the Hatter, รขยยor youรขยยll be asleep again
before itรขยยs done.รขยย
รขยยOnce upon a time there were three little sisters,รขยย the Dormouse began
in a great hurry; รขยยand their names were Elsie, Lacie, and Tillie; and
they lived at the bottom of a wellรขยยรขยย
รขยยWhat did they live on?รขยย said Alice, who always took a great interest
in questions of eating and drinking.
รขยยThey lived on treacle,รขยย said the Dormouse, after thinking a minute or
two.
รขยยThey couldnรขยยt have done that, you know,รขยย Alice gently remarked;
รขยยtheyรขยยd have been ill.รขยย
รขยยSo they were,รขยย said the Dormouse; รขยย_very_ ill.รขยย
Alice tried to fancy to herself what such an extraordinary ways of
living would be like, but it puzzled her too much, so she went on: รขยยBut
why did they live at the bottom of a well?รขยย
รขยยTake some more tea,รขยย the March Hare said to Alice, very earnestly.
รขยยIรขยยve had nothing yet,รขยย Alice replied in an offended tone, รขยยso I canรขยยt
take more.รขยย
รขยยYou mean you canรขยยt take _less_,รขยย said the Hatter: รขยยitรขยยs very easy to
take _more_ than nothing.รขยย
รขยยNobody asked _your_ opinion,รขยย said Alice.
รขยยWhoรขยยs making personal remarks now?รขยย the Hatter asked triumphantly.
Alice did not quite know what to say to this: so she helped herself to
some tea and bread-and-butter, and then turned to the Dormouse, and
repeated her question. รขยยWhy did they live at the bottom of a well?รขยย
The Dormouse again took a minute or two to think about it, and then
said, รขยยIt was a treacle-well.รขยย
รขยยThereรขยยs no such thing!รขยย Alice was beginning very angrily, but the
Hatter and the March Hare went รขยยSh! sh!รขยย and the Dormouse sulkily
remarked, รขยยIf you canรขยยt be civil, youรขยยd better finish the story for
yourself.รขยย
รขยยNo, please go on!รขยย Alice said very humbly; รขยยI wonรขยยt interrupt again. I
dare say there may be _one_.รขยย
รขยยOne, indeed!รขยย said the Dormouse indignantly. However, he consented to
go on. รขยยAnd so these three little sistersรขยยthey were learning to draw,
you knowรขยยรขยย
รขยยWhat did they draw?รขยย said Alice, quite forgetting her promise.
รขยยTreacle,รขยย said the Dormouse, without considering at all this time.
รขยยI want a clean cup,รขยย interrupted the Hatter: รขยยletรขยยs all move one place
on.รขยย
He moved on as he spoke, and the Dormouse followed him: the March Hare
moved into the Dormouseรขยยs place, and Alice rather unwillingly took the
place of the March Hare. The Hatter was the only one who got any
advantage from the change: and Alice was a good deal worse off than
before, as the March Hare had just upset the milk-jug into his plate.
Alice did not wish to offend the Dormouse again, so she began very
cautiously: รขยยBut I donรขยยt understand. Where did they draw the treacle
from?รขยย
รขยยYou can draw water out of a water-well,รขยย said the Hatter; รขยยso I should
think you could draw treacle out of a treacle-wellรขยยeh, stupid?รขยย
รขยยBut they were _in_ the well,รขยย Alice said to the Dormouse, not choosing
to notice this last remark.
รขยยOf course they were,รขยย said the Dormouse; รขยยรขยยwell in.รขยย
This answer so confused poor Alice, that she let the Dormouse go on for
some time without interrupting it.
รขยยThey were learning to draw,รขยย the Dormouse went on, yawning and rubbing
its eyes, for it was getting very sleepy; รขยยand they drew all manner of
thingsรขยยeverything that begins with an Mรขยยรขยย
รขยยWhy with an M?รขยย said Alice.
รขยยWhy not?รขยย said the March Hare.
Alice was silent.
The Dormouse had closed its eyes by this time, and was going off into a
doze; but, on being pinched by the Hatter, it woke up again with a
little shriek, and went on: รขยยรขยยthat begins with an M, such as
mouse-traps, and the moon, and memory, and muchnessรขยยyou know you say
things are รขยยmuch of a muchnessรขยยรขยยdid you ever see such a thing as a
drawing of a muchness?รขยย
รขยยReally, now you ask me,รขยย said Alice, very much confused, รขยยI donรขยยt
thinkรขยยรขยย
รขยยThen you shouldnรขยยt talk,รขยย said the Hatter.
This piece of rudeness was more than Alice could bear: she got up in
great disgust, and walked off; the Dormouse fell asleep instantly, and
neither of the others took the least notice of her going, though she
looked back once or twice, half hoping that they would call after her:
the last time she saw them, they were trying to put the Dormouse into
the teapot.
รขยยAt any rate Iรขยยll never go _there_ again!รขยย said Alice as she picked her
way through the wood. รขยยItรขยยs the stupidest tea-party I ever was at in
all my life!รขยย
Just as she said this, she noticed that one of the trees had a door
leading right into it. รขยยThatรขยยs very curious!รขยย she thought. รขยยBut
everythingรขยยs curious today. I think I may as well go in at once.รขยย And
in she went.
Once more she found herself in the long hall, and close to the little
glass table. รขยยNow, Iรขยยll manage better this time,รขยย she said to herself,
and began by taking the little golden key, and unlocking the door that
led into the garden. Then she went to work nibbling at the mushroom
(she had kept a piece of it in her pocket) till she was about a foot
high: then she walked down the little passage: and _then_รขยยshe found
herself at last in the beautiful garden, among the bright flower-beds
and the cool fountains.
CHAPTER VIII.
The Queenรขยยs Croquet-Ground
A large rose-tree stood near the entrance of the garden: the roses
growing on it were white, but there were three gardeners at it, busily
painting them red. Alice thought this a very curious thing, and she
went nearer to watch them, and just as she came up to them she heard
one of them say, รขยยLook out now, Five! Donรขยยt go splashing paint over me
like that!รขยย
รขยยI couldnรขยยt help it,รขยย said Five, in a sulky tone; รขยยSeven jogged my
elbow.รขยย
On which Seven looked up and said, รขยยThatรขยยs right, Five! Always lay the
blame on others!รขยย
รขยย_Youรขยยd_ better not talk!รขยย said Five. รขยยI heard the Queen say only
yesterday you deserved to be beheaded!รขยย
รขยยWhat for?รขยย said the one who had spoken first.
รขยยThatรขยยs none of _your_ business, Two!รขยย said Seven.
รขยยYes, it _is_ his business!รขยย said Five, รขยยand Iรขยยll tell himรขยยit was for
bringing the cook tulip-roots instead of onions.รขยย
Seven flung down his brush, and had just begun รขยยWell, of all the unjust
thingsรขยยรขยย when his eye chanced to fall upon Alice, as she stood watching
them, and he checked himself suddenly: the others looked round also,
and all of them bowed low.
รขยยWould you tell me,รขยย said Alice, a little timidly, รขยยwhy you are
painting those roses?รขยย
Five and Seven said nothing, but looked at Two. Two began in a low
voice, รขยยWhy the fact is, you see, Miss, this here ought to have been a
_red_ rose-tree, and we put a white one in by mistake; and if the Queen
was to find it out, we should all have our heads cut off, you know. So
you see, Miss, weรขยยre doing our best, afore she comes, toรขยยรขยย At this
moment Five, who had been anxiously looking across the garden, called
out รขยยThe Queen! The Queen!รขยย and the three gardeners instantly threw
themselves flat upon their faces. There was a sound of many footsteps,
and Alice looked round, eager to see the Queen.
First came ten soldiers carrying clubs; these were all shaped like the
three gardeners, oblong and flat, with their hands and feet at the
corners: next the ten courtiers; these were ornamented all over with
diamonds, and walked two and two, as the soldiers did. After these came
the royal children; there were ten of them, and the little dears came
jumping merrily along hand in hand, in couples: they were all
ornamented with hearts. Next came the guests, mostly Kings and Queens,
and among them Alice recognised the White Rabbit: it was talking in a
hurried nervous manner, smiling at everything that was said, and went
by without noticing her. Then followed the Knave of Hearts, carrying
the Kingรขยยs crown on a crimson velvet cushion; and, last of all this
grand procession, came THE KING AND QUEEN OF HEARTS.
Alice was rather doubtful whether she ought not to lie down on her face
like the three gardeners, but she could not remember ever having heard
of such a rule at processions; รขยยand besides, what would be the use of a
procession,รขยย thought she, รขยยif people had all to lie down upon their
faces, so that they couldnรขยยt see it?รขยย So she stood still where she was,
and waited.
When the procession came opposite to Alice, they all stopped and looked
at her, and the Queen said severely รขยยWho is this?รขยย She said it to the
Knave of Hearts, who only bowed and smiled in reply.
รขยยIdiot!รขยย said the Queen, tossing her head impatiently; and, turning to
Alice, she went on, รขยยWhatรขยยs your name, child?รขยย
รขยยMy name is Alice, so please your Majesty,รขยย said Alice very politely;
but she added, to herself, รขยยWhy, theyรขยยre only a pack of cards, after
all. I neednรขยยt be afraid of them!รขยย
รขยยAnd who are _these?_รขยย said the Queen, pointing to the three gardeners
who were lying round the rose-tree; for, you see, as they were lying on
their faces, and the pattern on their backs was the same as the rest of
the pack, she could not tell whether they were gardeners, or soldiers,
or courtiers, or three of her own children.
รขยยHow should _I_ know?รขยย said Alice, surprised at her own courage. รขยยItรขยยs
no business of _mine_.รขยย
The Queen turned crimson with fury, and, after glaring at her for a
moment like a wild beast, screamed รขยยOff with her head! Offรขยยรขยย
รขยยNonsense!รขยย said Alice, very loudly and decidedly, and the Queen was
silent.
The King laid his hand upon her arm, and timidly said รขยยConsider, my
dear: she is only a child!รขยย
The Queen turned angrily away from him, and said to the Knave รขยยTurn
them over!รขยย
The Knave did so, very carefully, with one foot.
รขยยGet up!รขยย said the Queen, in a shrill, loud voice, and the three
gardeners instantly jumped up, and began bowing to the King, the Queen,
the royal children, and everybody else.
รขยยLeave off that!รขยย screamed the Queen. รขยยYou make me giddy.รขยย And then,
turning to the rose-tree, she went on, รขยยWhat _have_ you been doing
here?รขยย
รขยยMay it please your Majesty,รขยย said Two, in a very humble tone, going
down on one knee as he spoke, รขยยwe were tryingรขยยรขยย
รขยย_I_ see!รขยย said the Queen, who had meanwhile been examining the roses.
รขยยOff with their heads!รขยย and the procession moved on, three of the
soldiers remaining behind to execute the unfortunate gardeners, who ran
to Alice for protection.
รขยยYou shanรขยยt be beheaded!รขยย said Alice, and she put them into a large
flower-pot that stood near. The three soldiers wandered about for a
minute or two, looking for them, and then quietly marched off after the
others.
รขยยAre their heads off?รขยย shouted the Queen.
รขยยTheir heads are gone, if it please your Majesty!รขยย the soldiers shouted
in reply.
รขยยThatรขยยs right!รขยย shouted the Queen. รขยยCan you play croquet?รขยย
The soldiers were silent, and looked at Alice, as the question was
evidently meant for her.
รขยยYes!รขยย shouted Alice.
รขยยCome on, then!รขยย roared the Queen, and Alice joined the procession,
wondering very much what would happen next.
รขยยItรขยยsรขยยitรขยยs a very fine day!รขยย said a timid voice at her side. She was
walking by the White Rabbit, who was peeping anxiously into her face.
รขยยVery,รขยย said Alice: รขยยรขยยwhereรขยยs the Duchess?รขยย
รขยยHush! Hush!รขยย said the Rabbit in a low, hurried tone. He looked
anxiously over his shoulder as he spoke, and then raised himself upon
tiptoe, put his mouth close to her ear, and whispered รขยยSheรขยยs under
sentence of execution.รขยย
รขยยWhat for?รขยย said Alice.
รขยยDid you say รขยยWhat a pity!รขยย?รขยย the Rabbit asked.
รขยยNo, I didnรขยยt,รขยย said Alice: รขยยI donรขยยt think itรขยยs at all a pity. I said
รขยยWhat for?รขยยรขยย
รขยยShe boxed the Queenรขยยs earsรขยยรขยย the Rabbit began. Alice gave a little
scream of laughter. รขยยOh, hush!รขยย the Rabbit whispered in a frightened
tone. รขยยThe Queen will hear you! You see, she came rather late, and the
Queen saidรขยยรขยย
รขยยGet to your places!รขยย shouted the Queen in a voice of thunder, and
people began running about in all directions, tumbling up against each
other; however, they got settled down in a minute or two, and the game
began. Alice thought she had never seen such a curious croquet-ground
in her life; it was all ridges and furrows; the balls were live
hedgehogs, the mallets live flamingoes, and the soldiers had to double
themselves up and to stand on their hands and feet, to make the arches.
The chief difficulty Alice found at first was in managing her flamingo:
she succeeded in getting its body tucked away, comfortably enough,
under her arm, with its legs hanging down, but generally, just as she
had got its neck nicely straightened out, and was going to give the
hedgehog a blow with its head, it _would_ twist itself round and look
up in her face, with such a puzzled expression that she could not help
bursting out laughing: and when she had got its head down, and was
going to begin again, it was very provoking to find that the hedgehog
had unrolled itself, and was in the act of crawling away: besides all
this, there was generally a ridge or furrow in the way wherever she
wanted to send the hedgehog to, and, as the doubled-up soldiers were
always getting up and walking off to other parts of the ground, Alice
soon came to the conclusion that it was a very difficult game indeed.
The players all played at once without waiting for turns, quarrelling
all the while, and fighting for the hedgehogs; and in a very short time
the Queen was in a furious passion, and went stamping about, and
shouting รขยยOff with his head!รขยย or รขยยOff with her head!รขยย about once in a
minute.
Alice began to feel very uneasy: to be sure, she had not as yet had any
dispute with the Queen, but she knew that it might happen any minute,
รขยยand then,รขยย thought she, รขยยwhat would become of me? Theyรขยยre dreadfully
fond of beheading people here; the great wonder is, that thereรขยยs any
one left alive!รขยย
She was looking about for some way of escape, and wondering whether she
could get away without being seen, when she noticed a curious
appearance in the air: it puzzled her very much at first, but, after
watching it a minute or two, she made it out to be a grin, and she said
to herself รขยยItรขยยs the Cheshire Cat: now I shall have somebody to talk
to.รขยย
รขยยHow are you getting on?รขยย said the Cat, as soon as there was mouth
enough for it to speak with.
Alice waited till the eyes appeared, and then nodded. รขยยItรขยยs no use
speaking to it,รขยย she thought, รขยยtill its ears have come, or at least one
of them.รขยย In another minute the whole head appeared, and then Alice put
down her flamingo, and began an account of the game, feeling very glad
she had someone to listen to her. The Cat seemed to think that there
was enough of it now in sight, and no more of it appeared.
รขยยI donรขยยt think they play at all fairly,รขยย Alice began, in rather a
complaining tone, รขยยand they all quarrel so dreadfully one canรขยยt hear
oneself speakรขยยand they donรขยยt seem to have any rules in particular; at
least, if there are, nobody attends to themรขยยand youรขยยve no idea how
confusing it is all the things being alive; for instance, thereรขยยs the
arch Iรขยยve got to go through next walking about at the other end of the
groundรขยยand I should have croqueted the Queenรขยยs hedgehog just now, only
it ran away when it saw mine coming!รขยย
รขยยHow do you like the Queen?รขยย said the Cat in a low voice.
รขยยNot at all,รขยย said Alice: รขยยsheรขยยs so extremelyรขยยรขยย Just then she noticed
that the Queen was close behind her, listening: so she went on,
รขยยรขยยlikely to win, that itรขยยs hardly worth while finishing the game.รขยย
The Queen smiled and passed on.
รขยยWho _are_ you talking to?รขยย said the King, going up to Alice, and
looking at the Catรขยยs head with great curiosity.
รขยยItรขยยs a friend of mineรขยยa Cheshire Cat,รขยย said Alice: รขยยallow me to
introduce it.รขยย
รขยยI donรขยยt like the look of it at all,รขยย said the King: รขยยhowever, it may
kiss my hand if it likes.รขยย
รขยยIรขยยd rather not,รขยย the Cat remarked.
รขยยDonรขยยt be impertinent,รขยย said the King, รขยยand donรขยยt look at me like
that!รขยย He got behind Alice as he spoke.
รขยยA cat may look at a king,รขยย said Alice. รขยยIรขยยve read that in some book,
but I donรขยยt remember where.รขยย
รขยยWell, it must be removed,รขยย said the King very decidedly, and he called
the Queen, who was passing at the moment, รขยยMy dear! I wish you would
have this cat removed!รขยย
The Queen had only one way of settling all difficulties, great or
small. รขยยOff with his head!รขยย she said, without even looking round.
รขยยIรขยยll fetch the executioner myself,รขยย said the King eagerly, and he
hurried off.
Alice thought she might as well go back, and see how the game was going
on, as she heard the Queenรขยยs voice in the distance, screaming with
passion. She had already heard her sentence three of the players to be
executed for having missed their turns, and she did not like the look
of things at all, as the game was in such confusion that she never knew
whether it was her turn or not. So she went in search of her hedgehog.
The hedgehog was engaged in a fight with another hedgehog, which seemed
to Alice an excellent opportunity for croqueting one of them with the
other: the only difficulty was, that her flamingo was gone across to
the other side of the garden, where Alice could see it trying in a
helpless sort of way to fly up into a tree.
By the time she had caught the flamingo and brought it back, the fight
was over, and both the hedgehogs were out of sight: รขยยbut it doesnรขยยt
matter much,รขยย thought Alice, รขยยas all the arches are gone from this side
of the ground.รขยย So she tucked it away under her arm, that it might not
escape again, and went back for a little more conversation with her
friend.
When she got back to the Cheshire Cat, she was surprised to find quite
a large crowd collected round it: there was a dispute going on between
the executioner, the King, and the Queen, who were all talking at once,
while all the rest were quite silent, and looked very uncomfortable.
The moment Alice appeared, she was appealed to by all three to settle
the question, and they repeated their arguments to her, though, as they
all spoke at once, she found it very hard indeed to make out exactly
what they said.
The executionerรขยยs argument was, that you couldnรขยยt cut off a head unless
there was a body to cut it off from: that he had never had to do such a
thing before, and he wasnรขยยt going to begin at _his_ time of life.
The Kingรขยยs argument was, that anything that had a head could be
beheaded, and that you werenรขยยt to talk nonsense.
The Queenรขยยs argument was, that if something wasnรขยยt done about it in
less than no time sheรขยยd have everybody executed, all round. (It was
this last remark that had made the whole party look so grave and
anxious.)
Alice could think of nothing else to say but รขยยIt belongs to the
Duchess: youรขยยd better ask _her_ about it.รขยย
รขยยSheรขยยs in prison,รขยย the Queen said to the executioner: รขยยfetch her here.รขยย
And the executioner went off like an arrow.
The Catรขยยs head began fading away the moment he was gone, and, by the
time he had come back with the Duchess, it had entirely disappeared; so
the King and the executioner ran wildly up and down looking for it,
while the rest of the party went back to the game.
CHAPTER IX.
The Mock Turtleรขยยs Story
รขยยYou canรขยยt think how glad I am to see you again, you dear old thing!รขยย
said the Duchess, as she tucked her arm affectionately into Aliceรขยยs,
and they walked off together.
Alice was very glad to find her in such a pleasant temper, and thought
to herself that perhaps it was only the pepper that had made her so
savage when they met in the kitchen.
รขยยWhen _Iรขยยm_ a Duchess,รขยย she said to herself, (not in a very hopeful
tone though), รขยยI wonรขยยt have any pepper in my kitchen _at all_. Soup
does very well withoutรขยยMaybe itรขยยs always pepper that makes people
hot-tempered,รขยย she went on, very much pleased at having found out a new
kind of rule, รขยยand vinegar that makes them sourรขยยand camomile that makes
them bitterรขยยandรขยยand barley-sugar and such things that make children
sweet-tempered. I only wish people knew _that_: then they wouldnรขยยt be
so stingy about it, you knowรขยยรขยย
She had quite forgotten the Duchess by this time, and was a little
startled when she heard her voice close to her ear. รขยยYouรขยยre thinking
about something, my dear, and that makes you forget to talk. I canรขยยt
tell you just now what the moral of that is, but I shall remember it in
a bit.รขยย
รขยยPerhaps it hasnรขยยt one,รขยย Alice ventured to remark.
รขยยTut, tut, child!รขยย said the Duchess. รขยยEverythingรขยยs got a moral, if only
you can find it.รขยย And she squeezed herself up closer to Aliceรขยยs side as
she spoke.
Alice did not much like keeping so close to her: first, because the
Duchess was _very_ ugly; and secondly, because she was exactly the
right height to rest her chin upon Aliceรขยยs shoulder, and it was an
uncomfortably sharp chin. However, she did not like to be rude, so she
bore it as well as she could.
รขยยThe gameรขยยs going on rather better now,รขยย she said, by way of keeping up
the conversation a little.
รขยยรขยยTis so,รขยย said the Duchess: รขยยand the moral of that isรขยยรขยยOh, รขยยtis love,
รขยยtis love, that makes the world go round!รขยยรขยย
รขยยSomebody said,รขยย Alice whispered, รขยยthat itรขยยs done by everybody minding
their own business!รขยย
รขยยAh, well! It means much the same thing,รขยย said the Duchess, digging her
sharp little chin into Aliceรขยยs shoulder as she added, รขยยand the moral of
_that_ isรขยยรขยยTake care of the sense, and the sounds will take care of
themselves.รขยยรขยย
รขยยHow fond she is of finding morals in things!รขยย Alice thought to
herself.
รขยยI dare say youรขยยre wondering why I donรขยยt put my arm round your waist,รขยย
the Duchess said after a pause: รขยยthe reason is, that Iรขยยm doubtful about
the temper of your flamingo. Shall I try the experiment?รขยย
รขยยHe might bite,รขยย Alice cautiously replied, not feeling at all anxious
to have the experiment tried.
รขยยVery true,รขยย said the Duchess: รขยยflamingoes and mustard both bite. And
the moral of that isรขยยรขยยBirds of a feather flock together.รขยยรขยย
รขยยOnly mustard isnรขยยt a bird,รขยย Alice remarked.
รขยยRight, as usual,รขยย said the Duchess: รขยยwhat a clear way you have of
putting things!รขยย
รขยยItรขยยs a mineral, I _think_,รขยย said Alice.
รขยยOf course it is,รขยย said the Duchess, who seemed ready to agree to
everything that Alice said; รขยยthereรขยยs a large mustard-mine near here.
And the moral of that isรขยยรขยยThe more there is of mine, the less there is
of yours.รขยยรขยย
รขยยOh, I know!รขยย exclaimed Alice, who had not attended to this last
remark, รขยยitรขยยs a vegetable. It doesnรขยยt look like one, but it is.รขยย
รขยยI quite agree with you,รขยย said the Duchess; รขยยand the moral of that
isรขยยรขยยBe what you would seem to beรขยยรขยยor if youรขยยd like it put more
simplyรขยยรขยยNever imagine yourself not to be otherwise than what it might
appear to others that what you were or might have been was not
otherwise than what you had been would have appeared to them to be
otherwise.รขยยรขยย
รขยยI think I should understand that better,รขยย Alice said very politely,
รขยยif I had it written down: but I canรขยยt quite follow it as you say it.รขยย
รขยยThatรขยยs nothing to what I could say if I chose,รขยย the Duchess replied,
in a pleased tone.
รขยยPray donรขยยt trouble yourself to say it any longer than that,รขยย said
Alice.
รขยยOh, donรขยยt talk about trouble!รขยย said the Duchess. รขยยI make you a present
of everything Iรขยยve said as yet.รขยย
รขยยA cheap sort of present!รขยย thought Alice. รขยยIรขยยm glad they donรขยยt give
birthday presents like that!รขยย But she did not venture to say it out
loud.
รขยยThinking again?รขยย the Duchess asked, with another dig of her sharp
little chin.
รขยยIรขยยve a right to think,รขยย said Alice sharply, for she was beginning to
feel a little worried.
รขยยJust about as much right,รขยย said the Duchess, รขยยas pigs have to fly; and
the mรขยยรขยย
But here, to Aliceรขยยs great surprise, the Duchessรขยยs voice died away,
even in the middle of her favourite word รขยยmoral,รขยย and the arm that was
linked into hers began to tremble. Alice looked up, and there stood the
Queen in front of them, with her arms folded, frowning like a
thunderstorm.
รขยยA fine day, your Majesty!รขยย the Duchess began in a low, weak voice.
รขยยNow, I give you fair warning,รขยย shouted the Queen, stamping on the
ground as she spoke; รขยยeither you or your head must be off, and that in
about half no time! Take your choice!รขยย
The Duchess took her choice, and was gone in a moment.
รขยยLetรขยยs go on with the game,รขยย the Queen said to Alice; and Alice was too
much frightened to say a word, but slowly followed her back to the
croquet-ground.
The other guests had taken advantage of the Queenรขยยs absence, and were
resting in the shade: however, the moment they saw her, they hurried
back to the game, the Queen merely remarking that a momentรขยยs delay
would cost them their lives.
All the time they were playing the Queen never left off quarrelling
with the other players, and shouting รขยยOff with his head!รขยย or รขยยOff with
her head!รขยย Those whom she sentenced were taken into custody by the
soldiers, who of course had to leave off being arches to do this, so
that by the end of half an hour or so there were no arches left, and
all the players, except the King, the Queen, and Alice, were in custody
and under sentence of execution.
Then the Queen left off, quite out of breath, and said to Alice, รขยยHave
you seen the Mock Turtle yet?รขยย
รขยยNo,รขยย said Alice. รขยยI donรขยยt even know what a Mock Turtle is.รขยย
รขยยItรขยยs the thing Mock Turtle Soup is made from,รขยย said the Queen.
รขยยI never saw one, or heard of one,รขยย said Alice.
รขยยCome on, then,รขยย said the Queen, รขยยand he shall tell you his history,รขยย
As they walked off together, Alice heard the King say in a low voice,
to the company generally, รขยยYou are all pardoned.รขยย รขยยCome, _thatรขยยs_ a
good thing!รขยย she said to herself, for she had felt quite unhappy at the
number of executions the Queen had ordered.
They very soon came upon a Gryphon, lying fast asleep in the sun. (If
you donรขยยt know what a Gryphon is, look at the picture.) รขยยUp, lazy
thing!รขยย said the Queen, รขยยand take this young lady to see the Mock
Turtle, and to hear his history. I must go back and see after some
executions I have ordered;รขยย and she walked off, leaving Alice alone
with the Gryphon. Alice did not quite like the look of the creature,
but on the whole she thought it would be quite as safe to stay with it
as to go after that savage Queen: so she waited.
The Gryphon sat up and rubbed its eyes: then it watched the Queen till
she was out of sight: then it chuckled. รขยยWhat fun!รขยย said the Gryphon,
half to itself, half to Alice.
รขยยWhat _is_ the fun?รขยย said Alice.
รขยยWhy, _she_,รขยย said the Gryphon. รขยยItรขยยs all her fancy, that: they never
executes nobody, you know. Come on!รขยย
รขยยEverybody says รขยยcome on!รขยย here,รขยย thought Alice, as she went slowly
after it: รขยยI never was so ordered about in all my life, never!รขยย
They had not gone far before they saw the Mock Turtle in the distance,
sitting sad and lonely on a little ledge of rock, and, as they came
nearer, Alice could hear him sighing as if his heart would break. She
pitied him deeply. รขยยWhat is his sorrow?รขยย she asked the Gryphon, and the
Gryphon answered, very nearly in the same words as before, รขยยItรขยยs all
his fancy, that: he hasnรขยยt got no sorrow, you know. Come on!รขยย
So they went up to the Mock Turtle, who looked at them with large eyes
full of tears, but said nothing.
รขยยThis here young lady,รขยย said the Gryphon, รขยยshe wants for to know your
history, she do.รขยย
รขยยIรขยยll tell it her,รขยย said the Mock Turtle in a deep, hollow tone: รขยยsit
down, both of you, and donรขยยt speak a word till Iรขยยve finished.รขยย
So they sat down, and nobody spoke for some minutes. Alice thought to
herself, รขยยI donรขยยt see how he can _ever_ finish, if he doesnรขยยt begin.รขยย
But she waited patiently.
รขยยOnce,รขยย said the Mock Turtle at last, with a deep sigh, รขยยI was a real
Turtle.รขยย
These words were followed by a very long silence, broken only by an
occasional exclamation of รขยยHjckrrh!รขยย from the Gryphon, and the constant
heavy sobbing of the Mock Turtle. Alice was very nearly getting up and
saying, รขยยThank you, sir, for your interesting story,รขยย but she could not
help thinking there _must_ be more to come, so she sat still and said
nothing.
รขยยWhen we were little,รขยย the Mock Turtle went on at last, more calmly,
though still sobbing a little now and then, รขยยwe went to school in the
sea. The master was an old Turtleรขยยwe used to call him Tortoiseรขยยรขยย
รขยยWhy did you call him Tortoise, if he wasnรขยยt one?รขยย Alice asked.
รขยยWe called him Tortoise because he taught us,รขยย said the Mock Turtle
angrily: รขยยreally you are very dull!รขยย
รขยยYou ought to be ashamed of yourself for asking such a simple
question,รขยย added the Gryphon; and then they both sat silent and looked
at poor Alice, who felt ready to sink into the earth. At last the
Gryphon said to the Mock Turtle, รขยยDrive on, old fellow! Donรขยยt be all
day about it!รขยย and he went on in these words:
รขยยYes, we went to school in the sea, though you maynรขยยt believe itรขยยรขยย
รขยยI never said I didnรขยยt!รขยย interrupted Alice.
รขยยYou did,รขยย said the Mock Turtle.
รขยยHold your tongue!รขยย added the Gryphon, before Alice could speak again.
The Mock Turtle went on.
รขยยWe had the best of educationsรขยยin fact, we went to school every dayรขยยรขยย
รขยย_Iรขยยve_ been to a day-school, too,รขยย said Alice; รขยยyou neednรขยยt be so
proud as all that.รขยย
รขยยWith extras?รขยย asked the Mock Turtle a little anxiously.
รขยยYes,รขยย said Alice, รขยยwe learned French and music.รขยย
รขยยAnd washing?รขยย said the Mock Turtle.
รขยยCertainly not!รขยย said Alice indignantly.
รขยยAh! then yours wasnรขยยt a really good school,รขยย said the Mock Turtle in a
tone of great relief. รขยยNow at _ours_ they had at the end of the bill,
รขยยFrench, music, _and washing_รขยยextra.รขยยรขยย
รขยยYou couldnรขยยt have wanted it much,รขยย said Alice; รขยยliving at the bottom
of the sea.รขยย
รขยยI couldnรขยยt afford to learn it.รขยย said the Mock Turtle with a sigh. รขยยI
only took the regular course.รขยย
รขยยWhat was that?รขยย inquired Alice.
รขยยReeling and Writhing, of course, to begin with,รขยย the Mock Turtle
replied; รขยยand then the different branches of ArithmeticรขยยAmbition,
Distraction, Uglification, and Derision.รขยย
รขยยI never heard of รขยยUglification,รขยยรขยย Alice ventured to say. รขยยWhat is it?รขยย
The Gryphon lifted up both its paws in surprise. รขยยWhat! Never heard of
uglifying!รขยย it exclaimed. รขยยYou know what to beautify is, I suppose?รขยย
รขยยYes,รขยย said Alice doubtfully: รขยยit meansรขยยtoรขยยmakeรขยยanythingรขยยprettier.รขยย
รขยยWell, then,รขยย the Gryphon went on, รขยยif you donรขยยt know what to uglify
is, you _are_ a simpleton.รขยย
Alice did not feel encouraged to ask any more questions about it, so
she turned to the Mock Turtle, and said รขยยWhat else had you to learn?รขยย
รขยยWell, there was Mystery,รขยย the Mock Turtle replied, counting off the
subjects on his flappers, รขยยรขยยMystery, ancient and modern, with
Seaography: then Drawlingรขยยthe Drawling-master was an old conger-eel,
that used to come once a week: _he_ taught us Drawling, Stretching, and
Fainting in Coils.รขยย
รขยยWhat was _that_ like?รขยย said Alice.
รขยยWell, I canรขยยt show it you myself,รขยย the Mock Turtle said: รขยยIรขยยm too
stiff. And the Gryphon never learnt it.รขยย
รขยยHadnรขยยt time,รขยย said the Gryphon: รขยยI went to the Classics master,
though. He was an old crab, _he_ was.รขยย
รขยยI never went to him,รขยย the Mock Turtle said with a sigh: รขยยhe taught
Laughing and Grief, they used to say.รขยย
รขยยSo he did, so he did,รขยย said the Gryphon, sighing in his turn; and both
creatures hid their faces in their paws.
รขยยAnd how many hours a day did you do lessons?รขยย said Alice, in a hurry
to change the subject.
รขยยTen hours the first day,รขยย said the Mock Turtle: รขยยnine the next, and so
on.รขยย
รขยยWhat a curious plan!รขยย exclaimed Alice.
รขยยThatรขยยs the reason theyรขยยre called lessons,รขยย the Gryphon remarked:
รขยยbecause they lessen from day to day.รขยย
This was quite a new idea to Alice, and she thought it over a little
before she made her next remark. รขยยThen the eleventh day must have been
a holiday?รขยย
รขยยOf course it was,รขยย said the Mock Turtle.
รขยยAnd how did you manage on the twelfth?รขยย Alice went on eagerly.
รขยยThatรขยยs enough about lessons,รขยย the Gryphon interrupted in a very
decided tone: รขยยtell her something about the games now.รขยย
CHAPTER X.
The Lobster Quadrille
The Mock Turtle sighed deeply, and drew the back of one flapper across
his eyes. He looked at Alice, and tried to speak, but for a minute or
two sobs choked his voice. รขยยSame as if he had a bone in his throat,รขยย
said the Gryphon: and it set to work shaking him and punching him in
the back. At last the Mock Turtle recovered his voice, and, with tears
running down his cheeks, he went on again:รขยย
รขยยYou may not have lived much under the seaรขยยรขยย (รขยยI havenรขยยt,รขยย said
Alice)รขยยรขยยand perhaps you were never even introduced to a lobsterรขยยรขยย
(Alice began to say รขยยI once tastedรขยยรขยย but checked herself hastily, and
said รขยยNo, neverรขยย) รขยยรขยยso you can have no idea what a delightful thing a
Lobster Quadrille is!รขยย
รขยยNo, indeed,รขยย said Alice. รขยยWhat sort of a dance is it?รขยย
รขยยWhy,รขยย said the Gryphon, รขยยyou first form into a line along the
sea-shoreรขยยรขยย
รขยยTwo lines!รขยย cried the Mock Turtle. รขยยSeals, turtles, salmon, and so on;
then, when youรขยยve cleared all the jelly-fish out of the wayรขยยรขยย
รขยย_That_ generally takes some time,รขยย interrupted the Gryphon.
รขยยรขยยyou advance twiceรขยยรขยย
รขยยEach with a lobster as a partner!รขยย cried the Gryphon.
รขยยOf course,รขยย the Mock Turtle said: รขยยadvance twice, set to partnersรขยยรขยย
รขยยรขยยchange lobsters, and retire in same order,รขยย continued the Gryphon.
รขยยThen, you know,รขยย the Mock Turtle went on, รขยยyou throw theรขยยรขยย
รขยยThe lobsters!รขยย shouted the Gryphon, with a bound into the air.
รขยยรขยยas far out to sea as you canรขยยรขยย
รขยยSwim after them!รขยย screamed the Gryphon.
รขยยTurn a somersault in the sea!รขยย cried the Mock Turtle, capering wildly
about.
รขยยChange lobsters again!รขยย yelled the Gryphon at the top of its voice.
รขยยBack to land again, and thatรขยยs all the first figure,รขยย said the Mock
Turtle, suddenly dropping his voice; and the two creatures, who had
been jumping about like mad things all this time, sat down again very
sadly and quietly, and looked at Alice.
รขยยIt must be a very pretty dance,รขยย said Alice timidly.
รขยยWould you like to see a little of it?รขยย said the Mock Turtle.
รขยยVery much indeed,รขยย said Alice.
รขยยCome, letรขยยs try the first figure!รขยย said the Mock Turtle to the
Gryphon. รขยยWe can do without lobsters, you know. Which shall sing?รขยย
รขยยOh, _you_ sing,รขยย said the Gryphon. รขยยIรขยยve forgotten the words.รขยย
So they began solemnly dancing round and round Alice, every now and
then treading on her toes when they passed too close, and waving their
forepaws to mark the time, while the Mock Turtle sang this, very slowly
and sadly:รขยย
รขยยWill you walk a little faster?รขยย said a whiting to a snail.
รขยยThereรขยยs a porpoise close behind us, and heรขยยs treading on my tail.
See how eagerly the lobsters and the turtles all advance!
They are waiting on the shingleรขยยwill you come and join the dance?
Will you, wonรขยยt you, will you, wonรขยยt you, will you join the dance?
Will you, wonรขยยt you, will you, wonรขยยt you, wonรขยยt you join the dance?
รขยยYou can really have no notion how delightful it will be
When they take us up and throw us, with the lobsters, out to sea!รขยย
But the snail replied รขยยToo far, too far!รขยย and gave a look askanceรขยย
Said he thanked the whiting kindly, but he would not join the dance.
Would not, could not, would not, could not, would not join the dance.
Would not, could not, would not, could not, could not join the dance.
รขยยWhat matters it how far we go?รขยย his scaly friend replied.
รขยยThere is another shore, you know, upon the other side.
The further off from England the nearer is to Franceรขยย
Then turn not pale, beloved snail, but come and join the dance.
Will you, wonรขยยt you, will you, wonรขยยt you, will you join the dance?
Will you, wonรขยยt you, will you, wonรขยยt you, wonรขยยt you join the dance?รขยย
รขยยThank you, itรขยยs a very interesting dance to watch,รขยย said Alice,
feeling very glad that it was over at last: รขยยand I do so like that
curious song about the whiting!รขยย
รขยยOh, as to the whiting,รขยย said the Mock Turtle, รขยยtheyรขยยyouรขยยve seen them,
of course?รขยย
รขยยYes,รขยย said Alice, รขยยIรขยยve often seen them at dinnรขยยรขยย she checked herself
hastily.
รขยยI donรขยยt know where Dinn may be,รขยย said the Mock Turtle, รขยยbut if youรขยยve
seen them so often, of course you know what theyรขยยre like.รขยย
รขยยI believe so,รขยย Alice replied thoughtfully. รขยยThey have their tails in
their mouthsรขยยand theyรขยยre all over crumbs.รขยย
รขยยYouรขยยre wrong about the crumbs,รขยย said the Mock Turtle: รขยยcrumbs would
all wash off in the sea. But they _have_ their tails in their mouths;
and the reason isรขยยรขยย here the Mock Turtle yawned and shut his
eyes.รขยยรขยยTell her about the reason and all that,รขยย he said to the Gryphon.
รขยยThe reason is,รขยย said the Gryphon, รขยยthat they _would_ go with the
lobsters to the dance. So they got thrown out to sea. So they had to
fall a long way. So they got their tails fast in their mouths. So they
couldnรขยยt get them out again. Thatรขยยs all.รขยย
รขยยThank you,รขยย said Alice, รขยยitรขยยs very interesting. I never knew so much
about a whiting before.รขยย
รขยยI can tell you more than that, if you like,รขยย said the Gryphon. รขยยDo you
know why itรขยยs called a whiting?รขยย
รขยยI never thought about it,รขยย said Alice. รขยยWhy?รขยย
รขยย_It does the boots and shoes_,รขยย the Gryphon replied very solemnly.
Alice was thoroughly puzzled. รขยยDoes the boots and shoes!รขยย she repeated
in a wondering tone.
รขยยWhy, what are _your_ shoes done with?รขยย said the Gryphon. รขยยI mean, what
makes them so shiny?รขยย
Alice looked down at them, and considered a little before she gave her
answer. รขยยTheyรขยยre done with blacking, I believe.รขยย
รขยยBoots and shoes under the sea,รขยย the Gryphon went on in a deep voice,
รขยยare done with a whiting. Now you know.รขยย
รขยยAnd what are they made of?รขยย Alice asked in a tone of great curiosity.
รขยยSoles and eels, of course,รขยย the Gryphon replied rather impatiently:
รขยยany shrimp could have told you that.รขยย
รขยยIf Iรขยยd been the whiting,รขยย said Alice, whose thoughts were still
running on the song, รขยยIรขยยd have said to the porpoise, รขยยKeep back,
please: we donรขยยt want _you_ with us!รขยยรขยย
รขยยThey were obliged to have him with them,รขยย the Mock Turtle said: รขยยno
wise fish would go anywhere without a porpoise.รขยย
รขยยWouldnรขยยt it really?รขยย said Alice in a tone of great surprise.
รขยยOf course not,รขยย said the Mock Turtle: รขยยwhy, if a fish came to _me_,
and told me he was going a journey, I should say รขยยWith what porpoise?รขยยรขยย
รขยยDonรขยยt you mean รขยยpurposeรขยย?รขยย said Alice.
รขยยI mean what I say,รขยย the Mock Turtle replied in an offended tone. And
the Gryphon added รขยยCome, letรขยยs hear some of _your_ adventures.รขยย
รขยยI could tell you my adventuresรขยยbeginning from this morning,รขยย said
Alice a little timidly: รขยยbut itรขยยs no use going back to yesterday,
because I was a different person then.รขยย
รขยยExplain all that,รขยย said the Mock Turtle.
รขยยNo, no! The adventures first,รขยย said the Gryphon in an impatient tone:
รขยยexplanations take such a dreadful time.รขยย
So Alice began telling them her adventures from the time when she first
saw the White Rabbit. She was a little nervous about it just at first,
the two creatures got so close to her, one on each side, and opened
their eyes and mouths so _very_ wide, but she gained courage as she
went on. Her listeners were perfectly quiet till she got to the part
about her repeating รขยย_You are old, Father William_,รขยย to the
Caterpillar, and the words all coming different, and then the Mock
Turtle drew a long breath, and said รขยยThatรขยยs very curious.รขยย
รขยยItรขยยs all about as curious as it can be,รขยย said the Gryphon.
รขยยIt all came different!รขยย the Mock Turtle repeated thoughtfully. รขยยI
should like to hear her try and repeat something now. Tell her to
begin.รขยย He looked at the Gryphon as if he thought it had some kind of
authority over Alice.
รขยยStand up and repeat รขยยรขยย_Tis the voice of the sluggard_,รขยยรขยย said the
Gryphon.
รขยยHow the creatures order one about, and make one repeat lessons!รขยย
thought Alice; รขยยI might as well be at school at once.รขยย However, she got
up, and began to repeat it, but her head was so full of the Lobster
Quadrille, that she hardly knew what she was saying, and the words came
very queer indeed:รขยย
รขยยรขยยTis the voice of the Lobster; I heard him declare,
รขยยYou have baked me too brown, I must sugar my hair.รขยย
As a duck with its eyelids, so he with his nose
Trims his belt and his buttons, and turns out his toes.รขยย
[later editions continued as follows
When the sands are all dry, he is gay as a lark,
And will talk in contemptuous tones of the Shark,
But, when the tide rises and sharks are around,
His voice has a timid and tremulous sound.]
รขยยThatรขยยs different from what _I_ used to say when I was a child,รขยย said
the Gryphon.
รขยยWell, I never heard it before,รขยย said the Mock Turtle; รขยยbut it sounds
uncommon nonsense.รขยย
Alice said nothing; she had sat down with her face in her hands,
wondering if anything would _ever_ happen in a natural way again.
รขยยI should like to have it explained,รขยย said the Mock Turtle.
รขยยShe canรขยยt explain it,รขยย said the Gryphon hastily. รขยยGo on with the next
verse.รขยย
รขยยBut about his toes?รขยย the Mock Turtle persisted. รขยยHow _could_ he turn
them out with his nose, you know?รขยย
รขยยItรขยยs the first position in dancing.รขยย Alice said; but was dreadfully
puzzled by the whole thing, and longed to change the subject.
รขยยGo on with the next verse,รขยย the Gryphon repeated impatiently: รขยยit
begins รขยย_I passed by his garden_.รขยยรขยย
Alice did not dare to disobey, though she felt sure it would all come
wrong, and she went on in a trembling voice:รขยย
รขยยI passed by his garden, and marked, with one eye,
How the Owl and the Panther were sharing a pieรขยยรขยย
[later editions continued as follows
The Panther took pie-crust, and gravy, and meat,
While the Owl had the dish as its share of the treat.
When the pie was all finished, the Owl, as a boon,
Was kindly permitted to pocket the spoon:
While the Panther received knife and fork with a growl,
And concluded the banquetรขยย]
รขยยWhat _is_ the use of repeating all that stuff,รขยย the Mock Turtle
interrupted, รขยยif you donรขยยt explain it as you go on? Itรขยยs by far the
most confusing thing _I_ ever heard!รขยย
รขยยYes, I think youรขยยd better leave off,รขยย said the Gryphon: and Alice was
only too glad to do so.
รขยยShall we try another figure of the Lobster Quadrille?รขยย the Gryphon
went on. รขยยOr would you like the Mock Turtle to sing you a song?รขยย
รขยยOh, a song, please, if the Mock Turtle would be so kind,รขยย Alice
replied, so eagerly that the Gryphon said, in a rather offended tone,
รขยยHm! No accounting for tastes! Sing her รขยย_Turtle Soup_,รขยย will you, old
fellow?รขยย
The Mock Turtle sighed deeply, and began, in a voice sometimes choked
with sobs, to sing this:รขยย
รขยยBeautiful Soup, so rich and green,
Waiting in a hot tureen!
Who for such dainties would not stoop?
Soup of the evening, beautiful Soup!
Soup of the evening, beautiful Soup!
Beauรขยยootiful Sooรขยยoop!
Beauรขยยootiful Sooรขยยoop!
Sooรขยยoop of the eรขยยeรขยยevening,
Beautiful, beautiful Soup!
รขยยBeautiful Soup! Who cares for fish,
Game, or any other dish?
Who would not give all else for two p
ennyworth only of beautiful Soup?
Pennyworth only of beautiful Soup?
Beauรขยยootiful Sooรขยยoop!
Beauรขยยootiful Sooรขยยoop!
Sooรขยยoop of the eรขยยeรขยยevening,
Beautiful, beautiรขยยFUL SOUP!รขยย
รขยยChorus again!รขยย cried the Gryphon, and the Mock Turtle had just begun
to repeat it, when a cry of รขยยThe trialรขยยs beginning!รขยย was heard in the
distance.
รขยยCome on!รขยย cried the Gryphon, and, taking Alice by the hand, it hurried
off, without waiting for the end of the song.
รขยยWhat trial is it?รขยย Alice panted as she ran; but the Gryphon only
answered รขยยCome on!รขยย and ran the faster, while more and more faintly
came, carried on the breeze that followed them, the melancholy words:รขยย
รขยยSooรขยยoop of the eรขยยeรขยยevening,
Beautiful, beautiful Soup!รขยย
CHAPTER XI.
Who Stole the Tarts?
The King and Queen of Hearts were seated on their throne when they
arrived, with a great crowd assembled about themรขยยall sorts of little
birds and beasts, as well as the whole pack of cards: the Knave was
standing before them, in chains, with a soldier on each side to guard
him; and near the King was the White Rabbit, with a trumpet in one
hand, and a scroll of parchment in the other. In the very middle of the
court was a table, with a large dish of tarts upon it: they looked so
good, that it made Alice quite hungry to look at themรขยยรขยยI wish theyรขยยd
get the trial done,รขยย she thought, รขยยand hand round the refreshments!รขยย
But there seemed to be no chance of this, so she began looking at
everything about her, to pass away the time.
Alice had never been in a court of justice before, but she had read
about them in books, and she was quite pleased to find that she knew
the name of nearly everything there. รขยยThatรขยยs the judge,รขยย she said to
herself, รขยยbecause of his great wig.รขยย
The judge, by the way, was the King; and as he wore his crown over the
wig, (look at the frontispiece if you want to see how he did it,) he
did not look at all comfortable, and it was certainly not becoming.
รขยยAnd thatรขยยs the jury-box,รขยย thought Alice, รขยยand those twelve creatures,รขยย
(she was obliged to say รขยยcreatures,รขยย you see, because some of them were
animals, and some were birds,) รขยยI suppose they are the jurors.รขยย She
said this last word two or three times over to herself, being rather
proud of it: for she thought, and rightly too, that very few little
girls of her age knew the meaning of it at all. However, รขยยjury-menรขยย
would have done just as well.
The twelve jurors were all writing very busily on slates. รขยยWhat are
they doing?รขยย Alice whispered to the Gryphon. รขยยThey canรขยยt have anything
to put down yet, before the trialรขยยs begun.รขยย
รขยยTheyรขยยre putting down their names,รขยย the Gryphon whispered in reply,
รขยยfor fear they should forget them before the end of the trial.รขยย
รขยยStupid things!รขยย Alice began in a loud, indignant voice, but she
stopped hastily, for the White Rabbit cried out, รขยยSilence in the
court!รขยย and the King put on his spectacles and looked anxiously round,
to make out who was talking.
Alice could see, as well as if she were looking over their shoulders,
that all the jurors were writing down รขยยstupid things!รขยย on their slates,
and she could even make out that one of them didnรขยยt know how to spell
รขยยstupid,รขยย and that he had to ask his neighbour to tell him. รขยยA nice
muddle their slatesรขยยll be in before the trialรขยยs over!รขยย thought Alice.
One of the jurors had a pencil that squeaked. This of course, Alice
could _not_ stand, and she went round the court and got behind him, and
very soon found an opportunity of taking it away. She did it so quickly
that the poor little juror (it was Bill, the Lizard) could not make out
at all what had become of it; so, after hunting all about for it, he
was obliged to write with one finger for the rest of the day; and this
was of very little use, as it left no mark on the slate.
รขยยHerald, read the accusation!รขยย said the King.
On this the White Rabbit blew three blasts on the trumpet, and then
unrolled the parchment scroll, and read as follows:รขยย
รขยยThe Queen of Hearts, she made some tarts,
All on a summer day:
The Knave of Hearts, he stole those tarts,
And took them quite away!รขยย
รขยยConsider your verdict,รขยย the King said to the jury.
รขยยNot yet, not yet!รขยย the Rabbit hastily interrupted. รขยยThereรขยยs a great
deal to come before that!รขยย
รขยยCall the first witness,รขยย said the King; and the White Rabbit blew
three blasts on the trumpet, and called out, รขยยFirst witness!รขยย
The first witness was the Hatter. He came in with a teacup in one hand
and a piece of bread-and-butter in the other. รขยยI beg pardon, your
Majesty,รขยย he began, รขยยfor bringing these in: but I hadnรขยยt quite finished
my tea when I was sent for.รขยย
รขยยYou ought to have finished,รขยย said the King. รขยยWhen did you begin?รขยย
The Hatter looked at the March Hare, who had followed him into the
court, arm-in-arm with the Dormouse. รขยยFourteenth of March, I _think_ it
was,รขยย he said.
รขยยFifteenth,รขยย said the March Hare.
รขยยSixteenth,รขยย added the Dormouse.
รขยยWrite that down,รขยย the King said to the jury, and the jury eagerly
wrote down all three dates on their slates, and then added them up, and
reduced the answer to shillings and pence.
รขยยTake off your hat,รขยย the King said to the Hatter.
รขยยIt isnรขยยt mine,รขยย said the Hatter.
รขยย_Stolen!_รขยย the King exclaimed, turning to the jury, who instantly made
a memorandum of the fact.
รขยยI keep them to sell,รขยย the Hatter added as an explanation; รขยยIรขยยve none
of my own. Iรขยยm a hatter.รขยย
Here the Queen put on her spectacles, and began staring at the Hatter,
who turned pale and fidgeted.
รขยยGive your evidence,รขยย said the King; รขยยand donรขยยt be nervous, or Iรขยยll
have you executed on the spot.รขยย
This did not seem to encourage the witness at all: he kept shifting
from one foot to the other, looking uneasily at the Queen, and in his
confusion he bit a large piece out of his teacup instead of the
bread-and-butter.
Just at this moment Alice felt a very curious sensation, which puzzled
her a good deal until she made out what it was: she was beginning to
grow larger again, and she thought at first she would get up and leave
the court; but on second thoughts she decided to remain where she was
as long as there was room for her.
รขยยI wish you wouldnรขยยt squeeze so.รขยย said the Dormouse, who was sitting
next to her. รขยยI can hardly breathe.รขยย
รขยยI canรขยยt help it,รขยย said Alice very meekly: รขยยIรขยยm growing.รขยย
รขยยYouรขยยve no right to grow _here_,รขยย said the Dormouse.
รขยยDonรขยยt talk nonsense,รขยย said Alice more boldly: รขยยyou know youรขยยre growing
too.รขยย
รขยยYes, but _I_ grow at a reasonable pace,รขยย said the Dormouse: รขยยnot in
that ridiculous fashion.รขยย And he got up very sulkily and crossed over
to the other side of the court.
All this time the Queen had never left off staring at the Hatter, and,
just as the Dormouse crossed the court, she said to one of the officers
of the court, รขยยBring me the list of the singers in the last concert!รขยย
on which the wretched Hatter trembled so, that he shook both his shoes
off.
"Give your evidence," the King repeated angrily, "or I'll have you
executed, whether you're nervous or not."
"I'm a poor man, your Majesty," the Hatter began, in a trembling voice,
"--and I hadn't begun my tea--not above a week or so--and what with the
bread-and-butter getting so thin--and the twinkling of the tea--"
"The twinkling of the _what?_" said the King.
"It _began_ with the tea," the Hatter replied.
"Of course twinkling begins with a T!" said the King sharply. "Do you
take me for a dunce? Go on!"
"I'm a poor man," the Hatter went on, "and most things twinkled after
that--only the March Hare said--"
"I didn't!" the March Hare interrupted in a great hurry.
"You did!" said the Hatter.
"I deny it!" said the March Hare.
"He denies it," said the King: "leave out that part."
"Well, at any rate, the Dormouse said--" the Hatter went on, looking
anxiously round to see if he would deny it too: but the Dormouse denied
nothing, being fast asleep.
"After that," continued the Hatter, "I cut some more bread-and-butter--"
"But what did the Dormouse say?" one of the jury asked.
"That I can't remember," said the Hatter.
"You _must_ remember," remarked the King, "or I'll have you executed."
The miserable Hatter dropped his teacup and bread-and-butter, and went
down on one knee. "I'm a poor man, your Majesty," he began.
"You're a _very_ poor _speaker_," said the King.
Here one of the guinea-pigs cheered, and was immediately suppressed by
the officers of the court. (As that is rather a hard word, I will just
explain to you how it was done. They had a large canvas bag, which tied
up at the mouth with strings: into this they slipped the guinea-pig,
head first, and then sat upon it.)
"I'm glad I've seen that done," thought Alice. "I've so often read in
the newspapers, at the end of trials, 'There was some attempts at
applause, which was immediately suppressed by the officers of the
court,' and I never understood what it meant till now."
"If that's all you know about it, you may stand down," continued the
King.
"I can't go no lower," said the Hatter: "I'm on the floor, as it is."
"Then you may _sit_ down," the King replied.
Here the other guinea-pig cheered, and was suppressed.
"Come, that finished the guinea-pigs!" thought Alice. "Now we shall get
on better."
"I'd rather finish my tea," said the Hatter, with an anxious look at
the Queen, who was reading the list of singers.
"You may go," said the King, and the Hatter hurriedly left the court,
without even waiting to put his shoes on.
"--and just take his head off outside," the Queen added to one of the
officers: but the Hatter was out of sight before the officer could get
to the door.
"Call the next witness!" said the King.
The next witness was the Duchess's cook. She carried the pepper-box in
her hand, and Alice guessed who it was, even before she got into the
court, by the way the people near the door began sneezing all at once.
"Give your evidence," said the King.
"Shan't," said the cook.
The King looked anxiously at the White Rabbit, who said in a low voice,
"Your Majesty must cross-examine _this_ witness."
"Well, if I must, I must," the King said, with a melancholy air, and,
after folding his arms and frowning at the cook till his eyes were
nearly out of sight, he said in a deep voice, "What are tarts made of?"
"Pepper, mostly," said the cook.
"Treacle," said a sleepy voice behind her.
"Collar that Dormouse," the Queen shrieked out. "Behead that Dormouse!
Turn that Dormouse out of court! Suppress him! Pinch him! Off with his
whiskers!"
For some minutes the whole court was in confusion, getting the Dormouse
turned out, and, by the time they had settled down again, the cook had
disappeared.
"Never mind!" said the King, with an air of great relief. "Call the
next witness." And he added in an undertone to the Queen, "Really, my
dear, _you_ must cross-examine the next witness. It quite makes my
forehead ache!"
Alice watched the White Rabbit as he fumbled over the list, feeling
very curious to see what the next witness would be like, "--for they
haven't got much evidence _yet_," she said to herself. Imagine her
surprise, when the White Rabbit read out, at the top of his shrill
little voice, the name "Alice!"
CHAPTER XII.
Alice's Evidence
"Here!" cried Alice, quite forgetting in the flurry of the moment how
large she had grown in the last few minutes, and she jumped up in such
a hurry that she tipped over the jury-box with the edge of her skirt,
upsetting all the jurymen on to the heads of the crowd below, and there
they lay sprawling about, reminding her very much of a globe of
goldfish she had accidentally upset the week before.
"Oh, I _beg_ your pardon!" she exclaimed in a tone of great dismay, and
began picking them up again as quickly as she could, for the accident
of the goldfish kept running in her head, and she had a vague sort of
idea that they must be collected at once and put back into the
jury-box, or they would die.
"The trial cannot proceed," said the King in a very grave voice, "until
all the jurymen are back in their proper places--_all_," he repeated
with great emphasis, looking hard at Alice as he said so.
Alice looked at the jury-box, and saw that, in her haste, she had put
the Lizard in head downwards, and the poor little thing was waving its
tail about in a melancholy way, being quite unable to move. She soon
got it out again, and put it right; "not that it signifies much," she
said to herself; "I should think it would be _quite_ as much use in the
trial one way up as the other."
As soon as the jury had a little recovered from the shock of being
upset, and their slates and pencils had been found and handed back to
them, they set to work very diligently to write out a history of the
accident, all except the Lizard, who seemed too much overcome to do
anything but sit with its mouth open, gazing up into the roof of the
court.
"What do you know about this business?" the King said to Alice.
"Nothing," said Alice.
"Nothing _whatever?_" persisted the King.
"Nothing whatever," said Alice.
"That's very important," the King said, turning to the jury. They were
just beginning to write this down on their slates, when the White
Rabbit interrupted: "_Un_important, your Majesty means, of course," he
said in a very respectful tone, but frowning and making faces at him as
he spoke.
"_Un_important, of course, I meant," the King hastily said, and went on
to himself in an undertone,
"important--unimportant--unimportant--important--" as if he were trying
which word sounded best.
Some of the jury wrote it down "important," and some "unimportant."
Alice could see this, as she was near enough to look over their slates;
"but it doesn't matter a bit," she thought to herself.
At this moment the King, who had been for some time busily writing in
his note-book, cackled out "Silence!" and read out from his book, "Rule
Forty-two. _All persons more than a mile high to leave the court_."
Everybody looked at Alice.
"_I'm_ not a mile high," said Alice.
"You are," said the King.
"Nearly two miles high," added the Queen.
"Well, I shan't go, at any rate," said Alice: "besides, that's not a
regular rule: you invented it just now."
"It's the oldest rule in the book," said the King.
"Then it ought to be Number One," said Alice.
The King turned pale, and shut his note-book hastily. "Consider your
verdict," he said to the jury, in a low, trembling voice.
"There's more evidence to come yet, please your Majesty," said the
White Rabbit, jumping up in a great hurry; "this paper has just been
picked up."
"What's in it?" said the Queen.
"I haven't opened it yet," said the White Rabbit, "but it seems to be a
letter, written by the prisoner to--to somebody."
"It must have been that," said the King, "unless it was written to
nobody, which isn't usual, you know."
"Who is it directed to?" said one of the jurymen.
"It isn't directed at all," said the White Rabbit; "in fact, there's
nothing written on the _outside_." He unfolded the paper as he spoke,
and added "It isn't a letter, after all: it's a set of verses."
"Are they in the prisoner's handwriting?" asked another of the jurymen.
"No, they're not," said the White Rabbit, "and that's the queerest
thing about it." (The jury all looked puzzled.)
"He must have imitated somebody else's hand," said the King. (The jury
all brightened up again.)
"Please your Majesty," said the Knave, "I didn't write it, and they
can't prove I did: there's no name signed at the end."
"If you didn't sign it," said the King, "that only makes the matter
worse. You _must_ have meant some mischief, or else you'd have signed
your name like an honest man."
There was a general clapping of hands at this: it was the first really
clever thing the King had said that day.
"That _proves_ his guilt," said the Queen.
"It proves nothing of the sort!" said Alice. "Why, you don't even know
what they're about!"
"Read them," said the King.
The White Rabbit put on his spectacles. "Where shall I begin, please
your Majesty?" he asked.
"Begin at the beginning," the King said gravely, "and go on till you
come to the end: then stop."
These were the verses the White Rabbit read:--
"They told me you had been to her,
And mentioned me to him:
She gave me a good character,
But said I could not swim.
He sent them word I had not gone
(We know it to be true):
If she should push the matter on,
What would become of you?
I gave her one, they gave him two,
You gave us three or more;
They all returned from him to you,
Though they were mine before.
If I or she should chance to be
Involved in this affair,
He trusts to you to set them free,
Exactly as we were.
My notion was that you had been
(Before she had this fit)
An obstacle that came between
Him, and ourselves, and it.
Don't let him know she liked them best,
For this must ever be
A secret, kept from all the rest,
Between yourself and me."
"That's the most important piece of evidence we've heard yet," said the
King, rubbing his hands; "so now let the jury--"
"If any one of them can explain it," said Alice, (she had grown so
large in the last few minutes that she wasn't a bit afraid of
interrupting him,) "I'll give him sixpence. _I_ don't believe there's
an atom of meaning in it."
The jury all wrote down on their slates, "_She_ doesn't believe there's
an atom of meaning in it," but none of them attempted to explain the
paper.
"If there's no meaning in it," said the King, "that saves a world of
trouble, you know, as we needn't try to find any. And yet I don't
know," he went on, spreading out the verses on his knee, and looking at
them with one eye; "I seem to see some meaning in them, after all.
"'_said I could not swim_--' you can't swim, can you?" he added, turning
to the Knave.
The Knave shook his head sadly. "Do I look like it?" he said. (Which he
certainly did _not_, being made entirely of cardboard.)
"All right, so far," said the King, and he went on muttering over the
verses to himself: "'_We know it to be true_--' that's the jury, of
course--'_I gave her one, they gave him two_--' why, that must be what he
did with the tarts, you know--"
"But, it goes on '_they all returned from him to you_,'" said Alice.
"Why, there they are!" said the King triumphantly, pointing to the
tarts on the table. "Nothing can be clearer than _that_. Then
again--'_before she had this fit_--' you never had fits, my dear, I
think?" he said to the Queen.
"Never!" said the Queen furiously, throwing an inkstand at the Lizard
as she spoke. (The unfortunate little Bill had left off writing on his
slate with one finger, as he found it made no mark; but he now hastily
began again, using the ink, that was trickling down his face, as long
as it lasted.)
"Then the words don't _fit_ you," said the King, looking round the
court with a smile. There was a dead silence.
"It's a pun!" the King added in an offended tone, and everybody
laughed, "Let the jury consider their verdict," the King said, for
about the twentieth time that day.
"No, no!" said the Queen. "Sentence first--verdict afterwards."
"Stuff and nonsense!" said Alice loudly. "The idea of having the
sentence first!"
"Hold your tongue!" said the Queen, turning purple.
"I won't!" said Alice.
"Off with her head!" the Queen shouted at the top of her voice. Nobody
moved.
"Who cares for you?" said Alice, (she had grown to her full size by
this time.) "You're nothing but a pack of cards!"
At this the whole pack rose up into the air, and came flying down upon
her: she gave a little scream, half of fright and half of anger, and
tried to beat them off, and found herself lying on the bank, with her
head in the lap of her sister, who was gently brushing away some dead
leaves that had fluttered down from the trees upon her face.
"Wake up, Alice dear!" said her sister; "Why, what a long sleep you've
had!"
"Oh, I've had such a curious dream!" said Alice, and she told her
sister, as well as she could remember them, all these strange
Adventures of hers that you have just been reading about; and when she
had finished, her sister kissed her, and said, "It _was_ a curious
dream, dear, certainly: but now run in to your tea; it's getting late."
So Alice got up and ran off, thinking while she ran, as well she might,
what a wonderful dream it had been.
But her sister sat still just as she left her, leaning her head on her
hand, watching the setting sun, and thinking of little Alice and all
her wonderful Adventures, till she too began dreaming after a fashion,
and this was her dream:--
First, she dreamed of little Alice herself, and once again the tiny
hands were clasped upon her knee, and the bright eager eyes were
looking up into hers--she could hear the very tones of her voice, and
see that queer little toss of her head to keep back the wandering hair
that _would_ always get into her eyes--and still as she listened, or
seemed to listen, the whole place around her became alive with the
strange creatures of her little sister's dream.
The long grass rustled at her feet as the White Rabbit hurried by--the
frightened Mouse splashed his way through the neighbouring pool--she
could hear the rattle of the teacups as the March Hare and his friends
shared their never-ending meal, and the shrill voice of the Queen
ordering off her unfortunate guests to execution--once more the pig-baby
was sneezing on the Duchess's knee, while plates and dishes crashed
around it--once more the shriek of the Gryphon, the squeaking of the
Lizard's slate-pencil, and the choking of the suppressed guinea-pigs,
filled the air, mixed up with the distant sobs of the miserable Mock
Turtle.
So she sat on, with closed eyes, and half believed herself in
Wonderland, though she knew she had but to open them again, and all
would change to dull reality--the grass would be only rustling in the
wind, and the pool rippling to the waving of the reeds--the rattling
teacups would change to tinkling sheep-bells, and the Queen's shrill
cries to the voice of the shepherd boy--and the sneeze of the baby, the
shriek of the Gryphon, and all the other queer noises, would change
(she knew) to the confused clamour of the busy farm-yard--while the
lowing of the cattle in the distance would take the place of the Mock
Turtle's heavy sobs.
Lastly, she pictured to herself how this same little sister of hers
would, in the after-time, be herself a grown woman; and how she would
keep, through all her riper years, the simple and loving heart of her
childhood: and how she would gather about her other little children,
and make _their_ eyes bright and eager with many a strange tale,
perhaps even with the dream of Wonderland of long ago: and how she
would feel with all their simple sorrows, and find a pleasure in all
their simple joys, remembering her own child-life, and the happy summer
days.
THE END
###Markdown
Define a function to plot word frequencies
###Code
from nltk.probability import FreqDist  # frequency counts
import seaborn as sns                  # bar chart plotting

def plot_word_frequency(words, top_n=10):
    """Plot a bar chart of the top_n most frequent words."""
    word_freq = FreqDist(words)
    labels = [element[0] for element in word_freq.most_common(top_n)]
    counts = [element[1] for element in word_freq.most_common(top_n)]
    plot = sns.barplot(x=labels, y=counts)
    return plot
###Output
_____no_output_____
###Markdown
Plot the word frequencies present in the Gutenberg corpus
###Code
alice_words = alice.text.split()
plot_word_frequency(alice_words, 15)
###Output
_____no_output_____
###Markdown
Stopwords Import stopwords from nltk
###Code
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
Look at the list of stopwords
###Code
import nltk
nltk.download('stopwords')
print(stopwords.words('english'))
###Output
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]
###Markdown
Let's remove stopwords from the following piece of text.
###Code
sample_text = "the great aim of education is not knowledge but action"
###Output
_____no_output_____
###Markdown
Break text into words
###Code
sample_words = sample_text.split()
print(sample_words)
###Output
['the', 'great', 'aim', 'of', 'education', 'is', 'not', 'knowledge', 'but', 'action']
###Markdown
Remove stopwords
###Code
sample_words = [word for word in sample_words if word not in stopwords.words('english')]
print(sample_words)
###Output
['great', 'aim', 'education', 'knowledge', 'action']
###Markdown
Join words back to sentence
###Code
sample_text = " ".join(sample_words)
print(sample_text)
###Output
great aim education knowledge action
###Markdown
Removing stopwords from the Alice corpus
###Code
stop_words = set(stopwords.words("english"))  # build the stopword set once, for speed on the full corpus
no_stops = [word for word in alice_words if word not in stop_words]
plot_word_frequency(no_stops, 10)
###Output
_____no_output_____ |
01 Machine Learning/scikit_examples_jupyter/gaussian_process/plot_gpr_noisy_targets.ipynb | ###Markdown
Gaussian Processes regression: basic introductory example

A simple one-dimensional regression example computed in two different ways:

1. A noise-free case
2. A noisy case with known noise-level per datapoint

In both cases, the kernel's parameters are estimated using the maximum likelihood principle.

The figures illustrate the interpolating property of the Gaussian Process model as well as its probabilistic nature in the form of a pointwise 95% confidence interval.

Note that the parameter ``alpha`` is applied as a Tikhonov regularization of the assumed covariance between the training points.
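As a quick sketch of that last point (the notation here is only for illustration): with a vector of per-point noise variances $\alpha$ (in the noisy example below, ``alpha=dy ** 2``), the regressor effectively works with a noise-augmented kernel matrix

$$K_y = K(X, X) + \operatorname{diag}(\alpha)$$

rather than $K(X, X)$ itself, so noisier observations constrain the fit less and the confidence interval widens around them.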
###Code
print(__doc__)
# Author: Vincent Dubourg <[email protected]>
# Jake Vanderplas <[email protected]>
# Jan Hendrik Metzen <[email protected]>
# License: BSD 3 clause
import numpy as np
from matplotlib import pyplot as plt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
np.random.seed(1)
def f(x):
"""The function to predict."""
return x * np.sin(x)
# ----------------------------------------------------------------------
# First the noiseless case
X = np.atleast_2d([1., 3., 5., 6., 7., 8.]).T
# Observations
y = f(X).ravel()
# Mesh the input space for evaluations of the real function, the prediction and
# its MSE
x = np.atleast_2d(np.linspace(0, 10, 1000)).T
# Instantiate a Gaussian Process model
kernel = C(1.0, (1e-3, 1e3)) * RBF(10, (1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, sigma = gp.predict(x, return_std=True)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
plt.figure()
plt.plot(x, f(x), 'r:', label=r'$f(x) = x\,\sin(x)$')
plt.plot(X, y, 'r.', markersize=10, label='Observations')
plt.plot(x, y_pred, 'b-', label='Prediction')
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-10, 20)
plt.legend(loc='upper left')
# ----------------------------------------------------------------------
# now the noisy case
X = np.linspace(0.1, 9.9, 20)
X = np.atleast_2d(X).T
# Observations and noise
y = f(X).ravel()
dy = 0.5 + 1.0 * np.random.random(y.shape)
noise = np.random.normal(0, dy)
y += noise
# Instantiate a Gaussian Process model
gp = GaussianProcessRegressor(kernel=kernel, alpha=dy ** 2,
n_restarts_optimizer=10)
# Fit to data using Maximum Likelihood Estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, sigma = gp.predict(x, return_std=True)
# Plot the function, the prediction and the 95% confidence interval based on
# the MSE
plt.figure()
plt.plot(x, f(x), 'r:', label=r'$f(x) = x\,\sin(x)$')
plt.errorbar(X.ravel(), y, dy, fmt='r.', markersize=10, label='Observations')
plt.plot(x, y_pred, 'b-', label='Prediction')
plt.fill(np.concatenate([x, x[::-1]]),
np.concatenate([y_pred - 1.9600 * sigma,
(y_pred + 1.9600 * sigma)[::-1]]),
alpha=.5, fc='b', ec='None', label='95% confidence interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____ |
salomon_exp/FAST_AI.ipynb | ###Markdown
**Fast AI experiments based on [this](https://towardsdatascience.com/transfer-learning-using-the-fastai-library-d686b238213e) blog.** Trained models are in this [Google Drive folder](https://drive.google.com/open?id=1bW0UjVudEarP5qTToxwDtcIKXX9iuM9L).
###Code
# from google.colab import drive
# drive.mount('/content/drive/')
###Output
_____no_output_____
###Markdown
get models from drive
###Code
#DATASET
# !unzip drive/My\ Drive/ammi-2020-convnets.zip
# # PSEUDO DATASET
# # # # !unzip drive/My\ Drive/data/random.zip -d here
# # GET SAVED MODELS HERE
# !mkdir models/
# !cp -r drive/My\ Drive/data/models/* models/.
# PUSH TRAINED MODELS TO GOOGLE DRIVE
# !cp -r models/* drive/My\ Drive/data/models/.
# !pip install pretrainedmodels
# !pip uninstall torch torchvision -y
# !pip install torch==1.4.0 torchvision==0.5.0
###Output
_____no_output_____
###Markdown
Importing Fast AI library
###Code
import os
import pretrainedmodels
from tqdm import tqdm
from fastai import *
from fastai.vision import *
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import auc,roc_curve
from math import floor
###Output
_____no_output_____
###Markdown
Looking at the data
###Code
# train_path = "./train/train"
# test_path = "./test/test/0"
data_path = "./data/train/train"
test_path = "./data/test/test"
extraimage_path = "./data/extraimages/extraimages"
# !ls
def get_labels(file_path):
dir_name = os.path.dirname(file_path)
split_dir_name = dir_name.split("/")
dir_levels = len(split_dir_name)
label = split_dir_name[dir_levels - 1]
return(label)
# get_labels("./train/train/cgm/train-cgm-528.jpg")
from glob import glob
imagePatches = glob("./data/train/train/*/*.*", recursive=True)
test_imagePatches = glob("./data/extraimages/extraimages/*.*", recursive=True)
imagePatches[0:10]
path=""
transform_kwargs = {"do_flip": True,
"flip_vert": True,
"max_rotate": 180,
"max_zoom": 1.1,
"max_lighting": 0.2,
"max_warp": 0.2,
"p_affine": 0.75,
"p_lighting": 0.7}
tfms = get_transforms(**transform_kwargs)
data = ImageDataBunch.from_name_func(path, imagePatches, label_func=get_labels, size=448,
bs=16,num_workers=2,ds_tfms=tfms,valid_pct=0.0
).normalize(imagenet_stats)
data.show_batch(rows=3, figsize=(8,8))
data,4525+1131
###Output
_____no_output_____
###Markdown
Transfer learning using a pre-trained model: SE-ResNeXt-101 (32x4d)
###Code
model_name = 'se_resnext101_32x4d_2'
def get_cadene_model(pretrained=True, model_name='se_resnext101_32x4d'):
if pretrained:
arch = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
else:
arch = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained=None)
return arch
learn = cnn_learner(data, get_cadene_model, metrics=[error_rate,accuracy])
learn.lr_find()
learn.recorder.plot()
# learn.apply_dropout(0.2)
learn.fit_one_cycle(15)
# learn.apply_dropout(0.2)
learn.fit_one_cycle(15)
learn.save(model_name)
###Output
_____no_output_____ |
notebooks/knowledgebase_comparison.ipynb | ###Markdown
Data Structuring and Pruning
###Code
## python 2.7 setup
# !pip install pathlib
## we need a python2 version of pyupset
# !pip uninstall -y pyupset
# !pip install https://github.com/agitter/py-upset/archive/master.zip
# !pip install seaborn
from __future__ import division
# where are data files loaded
%env DATAPATH=/util/elastic/
PAPER_SOURCES=['brca.json','cgi.json', 'civic.json', 'jax.json', 'molecularmatch.json', 'oncokb.json', 'pmkb.json']
# Load datasets
import json
import pathlib
import os
data_path = pathlib.Path(os.getenv('DATAPATH','/Users/awagner/Workspace/git/g2p-aggregator/data/local/0.8'))
harvested_sources = dict()
for path in [list(data_path.glob(file))[0] for file in PAPER_SOURCES]:
source = path.parts[-1].split('.')[0]
with path.open() as json_data:
# harvested_sources[source] = json.load(json_data) <- this should work, but doesn't due to malformed json
# what follows is a hack to reassemble into proper JSON object
associations = list()
for line in json_data:
associations.append(json.loads(line))
# resume intended function
harvested_sources[source] = associations
# Standardize structure and merge files
all_associations = list()
for source in harvested_sources:
for entry in harvested_sources[source]:
entry['raw'] = entry.pop(source)
all_associations.append(entry)
len(all_associations)
from collections import Counter
def report_groups(associations):
groups = Counter()
for association in associations:
groups[association['source']] += 1
total = sum(groups.values())
for group in sorted(groups):
print("{}: {} ({:.1f}%)".format(group, groups[group], groups[group] / len(harvested_sources[group]) * 100))
print("Total: {} ({:.1f}%)".format(total, total / len(all_associations) * 100))
report_groups(all_associations)
# Associations with more than 1 feature
multi_featured = [x for x in all_associations if len(x['features']) > 1]
len(multi_featured) / len(all_associations)
report_groups(multi_featured)
# Associations with feature name lists
listed_feature_names = [x for x in all_associations if isinstance(x['feature_names'], list)]
len(listed_feature_names) / len(all_associations)
report_groups(listed_feature_names)
len([x for x in listed_feature_names if len(x['feature_names']) >1 ])
###Output
_____no_output_____
###Markdown
Feature coordinate filtering

What follows is a detailed look at associations without start and end coordinates after normalization, and a set of regular expression filters to separate out these associations into chunks that can be annotated with gene- or exon-level coordinates, as appropriate.
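To make the filtering concrete, here is a small, self-contained illustration (the pattern mirrors the protein point-mutation filter `ppm_re` defined below; the two example strings are invented for this sketch and are not taken from the harvested data):

```python
import re

# gene symbol, then a separator, then something shaped like V600E
ppm_re = re.compile(r'\w+(:| )[a-z]\d+[a-z]?(fs\*?)?$', re.IGNORECASE)

ppm_re.search("BRAF V600E") is not None       # True  -> reported by this filter
ppm_re.search("BCR-ABL1 fusion") is not None  # False -> passed on to the next filter
```

Each call to `feature_filter` below reports the associations whose `feature_names` match and hands the remainder on to the next pattern.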
###Code
# Associations with coordinate features
coord_featured = list()
no_coord_featured = list()
for association in all_associations:
c = 0
for feature in association['features']:
if ('start' in feature) and ('end') in feature:
coord_featured.append(association)
break
else:
c+=1
if c == len(association['features']):
no_coord_featured.append(association)
report_groups(coord_featured)
report_groups(no_coord_featured)
# First association has feature, but no end coord
harvested_sources['cgi'][0]['features']
# Associations with partial coordinate features
partial_coord_featured = list()
no_partial_coord_featured = list()
for association in all_associations:
c = 0
for feature in association['features']:
if ('start' in feature):
partial_coord_featured.append(association)
break
else:
c+=1
if c == len(association['features']):
no_partial_coord_featured.append(association)
report_groups(no_partial_coord_featured)
def get_feature_names(associations):
return (list(map(lambda x: x['feature_names'], associations)))
feature_names = get_feature_names(no_partial_coord_featured)
no_partial_coord_featured_no_feature_names = [x for x in no_partial_coord_featured if x['feature_names'] is None]
no_partial_coord_featured_with_feature_names = [x for x in no_partial_coord_featured if x['feature_names'] is not None]
report_groups(no_partial_coord_featured_no_feature_names)
# All of these have exactly 1 gene name
len([x['genes'] for x in no_partial_coord_featured_no_feature_names if len(x['genes']) == 1])
report_groups(no_partial_coord_featured_with_feature_names)
import re
def test_curls(associations):
# utility to generate curl commands
names = []
for a in associations:
for f in a['features']:
parts = re.split(' +|:',f['description'].strip())
names.append(tuple(parts))
names = list(set(names))
feature_lookups = [t for t in names if len(t) > 1]
if len(feature_lookups) > 0:
print '# curl commands to find feature location'
for t in feature_lookups:
print "curl -s 'http://myvariant.info/v1/query?q={}%20{}' | jq '.hits[0] | {{name: \"{} {}\", referenceName: \"GRCh37\", chromosome: .chrom, start: .hg19.start, end: .hg19.end, ref: .vcf.ref, alt: .vcf.alt }}'".format(t[0],t[1], t[0],t[1])
gene_lookups = [t for t in names if len(t) == 1]
if len(gene_lookups) > 0:
print '# curl commands to find gene location'
for t in gene_lookups:
print "curl -s 'http://mygene.info/v3/query?q={}&fields=genomic_pos_hg19' | jq .hits[0].genomic_pos_hg19".format(t[0])
def feature_filter(re_obj, associations):
# report matches and return non-matches
found = list(filter(lambda x: re_obj.search(x['feature_names']) is not None, associations))
not_found = list(filter(lambda x: re_obj.search(x['feature_names']) is None, associations))
report_groups(found)
# comment following line to suppress curl test commands
test_curls(found)
return(not_found, found)
amp_re = re.compile(r'(amplification)|(loss)|(amp)', re.IGNORECASE)
(remainder, found) = feature_filter(amp_re, no_partial_coord_featured_with_feature_names)
fusion_re = re.compile(r'(\w{2,}-\w{2,})|(fusion)', re.IGNORECASE)
(r2, found) = feature_filter(fusion_re, remainder)
ppm_re = re.compile(r'\w+(:| )[a-z]\d+[a-z]?(fs\*?)?$', re.IGNORECASE)
(r3, found) = feature_filter(ppm_re, r2)
indel_re = re.compile(r'\w+(:| )\w+(ins\w+)|(del($|ins\w+))|(dup$)')
(r4, found) = feature_filter(indel_re, r3)
bucket_re = re.compile(r'[A-Z0-9]+( (in)?act)?( oncogenic)? mut((ant)|(ation))?$')
(r5,found) = feature_filter(bucket_re, r4)
exon_re = re.compile(r'exon', re.IGNORECASE)
(r6,found) = feature_filter(exon_re, r5)
expression_re = re.compile(r'(exp)|(^\w+ (pos(itive)?)|(neg(ative)?)|(biallelic inactivation)$)|(truncating)|(deletion)', re.IGNORECASE)
(r7, found) = feature_filter(expression_re, r6)
report_groups(r7)
get_feature_names([x for x in r7 if x['source'] == 'cgi'])
###Output
_____no_output_____
###Markdown
Knowledgebase Comparison Genes
###Code
from collections import defaultdict
def genes_by_source(associations):
source_genes = defaultdict(set)
for association in associations:
source_genes[association['source']].update(association['genes'])
return source_genes
s = genes_by_source(all_associations)
import pyupset as pyu
import pandas as pd
%matplotlib inline
def plot_overlap(set_dict):
d = {g: pd.DataFrame(list(set_dict[g])) for g in set_dict}
pyu.plot(d, inters_size_bounds=(3, 400000))
# omitting BRCA (only 2 genes)
s = {k: v for k, v in s.items() if k != 'brca'}
plot_overlap(s)
# Genes observed in all knowledgebases
#
# features provenance
provenance = {'unknown_provenance':0}
unknown_provenance = []
for association in all_associations:
c = 0
for feature in association['features']:
if 'provenance_rule' in feature:
if feature['provenance_rule'] not in provenance:
provenance[feature['provenance_rule']] = 0
provenance[feature['provenance_rule']] += 1
else:
provenance['unknown_provenance'] += 1
unknown_provenance.append([association['source'], feature])
provenance
for unknown in unknown_provenance:
print unknown[0], unknown[1]['description']
###Output
cgi FLT3-ITD
cgi FLT3-ITD
cgi FLT3-ITD
cgi FLT3-ITD
cgi MAP2K1 (Q56P,P124S,P124L;C121S)
cgi FLT3-ITD
cgi JAK1 (S646F;R683)
cgi MLL2 oncogenic mutation
cgi MET (Y1230C;Y1235D)
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRCA2 del
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRAF V600E/K
jax BRCA2 del
jax BRCA2 del
jax BRCA2 del
civic HLA-C COPY-NEUTRAL LOSS OF HETEROZYGOSITY
molecularmatch MET MET c.2888-52_2927delGGGGCCCATGATAGCCGTCTTTAACAAGCTCTTTCTTTCTCTCTGTTTTAAGATCTGGGCAGTGAATTAGTTCGCTACGATGCAAGAGTACAinsCC
molecularmatch PGR ER/PR positive
molecularmatch PGR ER/PR positive
molecularmatch MET MET c.2888-6_2888-2delTTTAAinsG
molecularmatch MET MET c.2888-15_2915delTCTCTCTGTTTTAAGATCTGGGCAGTGAATTAGTTCGCTACGAinsT
molecularmatch PGR ER/PR positive
molecularmatch PGR ER/PR positive
molecularmatch PGR ER/PR positive
molecularmatch MET MET c.2888-28_2888-3delCAAGCTCTTTCTTTCTCTCTGTTTTAinsAAAC
molecularmatch MET MET c.2888-17_2888-1delTTTCTCTCTGTTTTAAGinsAA
molecularmatch MET MET c.3028+2_3028+4delTATinsACC
molecularmatch VEGF
molecularmatch BRCA
molecularmatch VEGFR
molecularmatch VEGF
molecularmatch VEGFR
molecularmatch PGR ER/PR positive
molecularmatch PGR ER/PR positive
molecularmatch MET MET c.2888-5_2890delTTAAGATCinsATA
molecularmatch BRCA1 BRCA1 Q1756fs
molecularmatch PDGFR
molecularmatch VEGFR
molecularmatch PDGFR
molecularmatch PGR ER/PR positive
molecularmatch VEGF
molecularmatch MET MET c.2888-33_2888-7delTTTAACAAGCTCTTTCTTTCTCTCTGTinsTTAAAACTG
molecularmatch PDGFR
molecularmatch VEGFR
oncokb HLA-B Truncating Mutations
oncokb HLA-A 596_619splice
oncokb Other Biomarkers Microsatellite Instability-High
oncokb WHSC1L1 Amplification
oncokb Other Biomarkers Microsatellite Instability-High
oncokb FAM58A Truncating Mutations
oncokb HLA-A Truncating Mutations
oncokb Other Biomarkers Microsatellite Instability-High
|
geolocalization-and-clustering.ipynb | ###Markdown
Geolocalization analysis and data visualization

__Scope__: optimize leaflet distribution by viewing customers on the map.

Import libraries
###Code
import pandas as pd # data Extract Transform Load
import numpy as np # linear algebra
import matplotlib.pyplot as plt # plotting
import matplotlib.image as mpimg # plotting
%matplotlib inline
import sys # system operations
# machine learning libs
from sklearn.preprocessing import normalize
from sklearn.cluster import AgglomerativeClustering
import scipy.cluster.hierarchy as shc
###Output
_____no_output_____
###Markdown
Load Data Frame
###Code
main_path =sys.path[0]
path_data ="/data_geovisitors/onlygeo_with_ip.csv"
df = pd.read_csv(main_path+path_data,';')
df.head()
# rename the first three columns (assigning df[col] = 'name' would overwrite the column values, not the header)
df = df.rename(columns={df.columns[0]: 'date',
                        df.columns[1]: 'time',
                        df.columns[2]: 'geolock'})
###Output
_____no_output_____
###Markdown
Get the map for your boundaries. Insert the max and min latitude and longitude chosen on the site below and download the map. With OpenStreetMap.org I download the map I want for my coordinates; select the Layers / ÖPNVKarte option on the right to show more addresses on the map.

**Sites**
- Tutorial on how to download the map: https://medium.com/@abuqassim115/thanks-for-your-response-frank-fb869824ede2
- OpenStreetMap.org: https://www.openstreetmap.org/export#map=5/51.500/-0.100

**Examples**
- North-Italy zone: https://www.openstreetmap.org/#map=6/43.077/8.262
- A specific Italian city, Montebelluna: https://www.openstreetmap.org/#map=13/45.7745/12.0216

Take the coordinate data in the Data Frame only from the previously set boundaries. From street address to coordinates: https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/utils/geocoder

Data Structure
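If you prefer to look the store address up in code rather than through the geocoder page linked above, a minimal sketch along these lines works. It assumes the extra `geopy` package (not used elsewhere in this notebook), and the address string is only an example:

```python
# optional helper, not part of this notebook's requirements: pip install geopy
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="geolocalization-notebook")
location = geolocator.geocode("Corso Milano, Padova, Italy")  # example address
print(location.latitude, location.longitude)
```

The returned latitude/longitude can then be pasted into the `coordinate_store` entries of the `cities` dictionary defined below.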
###Code
path_map_img = main_path+"/img/only-map/"
global cities
cities = { 'padova':{'map_path':path_map_img+'padova.png',
'lat_max':45.4476,
'lat_min':45.3657,
'lng_max':11.9868,
'lng_min':11.7942,
'coordinate_store':{'lat':45.412749,
'lng':11.919453
}
},
'montebelluna':{'map_path':path_map_img+'montebelluna.png',
'lat_max':45.7951,
'lat_min':45.7544,
'lng_max':12.0811,
'lng_min':12.0063,
'coordinate_store':{'lat':45.779023,
'lng':12.06014
}
}
}
def extract_boundries(data_cities,city_name):
"""
extract latitude and longitude min/max
from the data set with dictionary keys
"""
lat_max = data_cities[city_name]['lat_max']
lat_min = data_cities[city_name]['lat_min']
lng_max = data_cities[city_name]['lng_max']
lng_min = data_cities[city_name]['lng_min']
return([lat_max,lat_min,lng_max,lng_min])
def filter_df_for_plot(df,data_cities,city_name):
"""
filter dataframe with the boundries
of the city map
"""
boundries=extract_boundries(data_cities,city_name)
df_filtered = df[
(df['LAT']< boundries[0]) &
(df['LAT']>= boundries[1]) &
(df['LNG']< boundries[2]) &
(df['LNG']>= boundries[3])]
return df_filtered
def create_bbox(boundries):
# BBox serves for the plotting size figures
BBox = ((boundries[3], boundries[2],
boundries[1], boundries[0]))
return BBox
path_to_save_imgs = main_path+"/img/map-with-points/"
def plot_map(city_name,df=df,data_cities=cities):
# set boundries
boundries = extract_boundries(data_cities,city_name)
bbox = create_bbox(boundries)
# store coordinates
X_store_coordinates = data_cities[city_name]['coordinate_store']['lng']
Y_store_coordinates = data_cities[city_name]['coordinate_store']['lat']
# load background img
IMG=plt.imread(path_map_img+city_name+'.png')
# create figure
fig, ax = plt.subplots()
# plot
ax.scatter(df.LNG, df.LAT, zorder=1, alpha=0.5 , c='r', s=10)
ax.scatter(X_store_coordinates,Y_store_coordinates,c='b', s=50)
    # set figure boundaries
ax.set_xlim(bbox[0],bbox[1])
ax.set_ylim(bbox[2],bbox[3])
    # aesthetics
plt.title(" Client map of {0} ".format(city_name[:1].upper()+city_name[1:]))
plt.xlabel('longitude')
plt.ylabel('latitude')
# show
ax.imshow(IMG, zorder=0, extent = bbox, aspect= 'auto')
# save
fig.savefig(path_to_save_imgs+city_name+'.png', dpi=300, bbox_inches='tight')
def main_plot(city_name,data_cities=cities, dataframe=df):
"""go to cities dictionary
    extract the boundaries (latitude min/max and longitude min/max),
    filter the dataframe of clients with those boundaries,
    extract the image path and plot each client and the store position over it
    """
dataframe = filter_df_for_plot(dataframe, data_cities ,city_name)
plot_map(city_name,dataframe)
main_plot('padova')
main_plot('montebelluna')
###Output
_____no_output_____
###Markdown
CLUSTERING
###Code
def filter_df_for_clustering(df,city_name, data_cities=cities):
boundries=extract_boundries(data_cities,city_name)
df_filtered = df[
(df['LAT']< boundries[0]) &
(df['LAT']>= boundries[1]) &
(df['LNG']< boundries[2]) &
(df['LNG']>= boundries[3])]
df_filtered2 = df_filtered[['LAT', 'LNG']]
return df_filtered2
def hierarchical_clustering(city_name,df,N_cluster=5,data_cities=cities):
# machine learning
cluster = AgglomerativeClustering(n_clusters= N_cluster, affinity='euclidean', linkage='ward')
cluster.fit_predict(df)
# SETTINGs
point_dimention = 4 # [ 0.1 - 100 ]
opacity = 0.8 # [ 0.01 - 1 ]
# PLOT
plt.figure(figsize=(50, 20))
    # set boundaries
boundries = extract_boundries(data_cities,city_name)
bbox = create_bbox(boundries)
# store coordinates
X_store_coordinates = data_cities[city_name]['coordinate_store']['lng']
Y_store_coordinates = data_cities[city_name]['coordinate_store']['lat']
# load background img
IMG=plt.imread(path_map_img+city_name+'.png')
# create figure
fig, ax = plt.subplots()
# plot
ax.scatter(np.array(df['LNG']),np.array(df['LAT']),
alpha= opacity , c=cluster.labels_,
cmap='gist_rainbow_r',marker='o', s = point_dimention)
ax.scatter(X_store_coordinates,Y_store_coordinates, c ='r', s=30)
    # set figure boundaries
ax.set_xlim(bbox[0],bbox[1])
ax.set_ylim(bbox[2],bbox[3])
    # aesthetics
plt.title(" Clusters of client map of {0} ".format(city_name[:1].upper()+city_name[1:]))
plt.xlabel('longitude')
plt.ylabel('latitude')
# show
ax.imshow(IMG, zorder=0, extent = bbox, aspect= 'auto')
# save
fig.savefig(path_to_save_imgs+city_name+'_cluster.png', dpi=1200, bbox_inches='tight')
def main_clustering(city_name,N_cluster=20,data_cities=cities,dataframe=df):
dataframe=filter_df_for_clustering(df, city_name,data_cities)
hierarchical_clustering(city_name,dataframe,N_cluster)
main_clustering('padova',5)
main_clustering('montebelluna',5)
###Output
_____no_output_____ |
Module4/Module4 - Lab2CopyLab3.ipynb | ###Markdown
DAT210x - Programming with Python for DS Module4- Lab2
###Code
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
from sklearn.decomposition import PCA
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Some Boilerplate Code For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations. We've added some notes to the code in case you're interested in knowing what it's doing: A Note on SKLearn's `.transform()` calls: Any time you perform a transformation on your data, you lose the column header names because the output of SciKit-Learn's `.transform()` method is an NDArray and not a dataframe.This actually makes a lot of sense because there are essentially two types of transformations:- Those that adjust the scale of your features, and- Those that alter the number of features, perhaps even changing their values entirely.An example of adjusting the scale of a feature would be changing centimeters to inches. Changing the feature entirely would be like using PCA to reduce 300 columns to 30. In either case, the original column's units have either been altered or no longer exist at all, so it's up to you to assign names to your columns after any transformation, if you'd like to store the resulting NDArray back into a dataframe.
###Code
def scaleFeaturesDF(df):
# Feature scaling is a type of transformation that only changes the
# scale, but not number of features. Because of this, we can still
# use the original dataset's column names... so long as we keep in
# mind that the _units_ have been altered:
scaled = preprocessing.StandardScaler().fit_transform(df)
scaled = pd.DataFrame(scaled, columns=df.columns)
print("New Variances:\n", scaled.var())
print("New Describe:\n", scaled.describe())
return scaled
###Output
_____no_output_____
###Markdown
SKLearn contains many methods for transforming your features by scaling them (a type of pre-processing): - `RobustScaler` - `Normalizer` - `MinMaxScaler` - `MaxAbsScaler` - `StandardScaler` - ...http://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessingHowever in order to be effective at PCA, there are a few requirements that must be met, and which will drive the selection of your scaler. PCA requires your data is standardized -- in other words, its _mean_ should equal 0, and it should have unit variance.SKLearn's regular `Normalizer()` doesn't zero out the mean of your data, it only clamps it, so it could be inappropriate to use depending on your data. `MinMaxScaler` and `MaxAbsScaler` both fail to set a unit variance, so you won't be using them here either. `RobustScaler` can work, again depending on your data (watch for outliers!). So for this assignment, you're going to use the `StandardScaler`. Get familiar with it by visiting these two websites:- http://scikit-learn.org/stable/modules/preprocessing.htmlpreprocessing-scaler- http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.htmlsklearn.preprocessing.StandardScaler Lastly, some code to help with visualizations:
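As an illustrative sketch (my own toy numbers, not part of the lab data), the difference between these scalers is easy to check:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

toy = np.array([[1.0], [2.0], [3.0], [10.0]])  # made-up feature values

standardized = StandardScaler().fit_transform(toy)
min_maxed = MinMaxScaler().fit_transform(toy)

# StandardScaler gives (approximately) zero mean and unit variance...
print(standardized.mean(), standardized.std())
# ...while MinMaxScaler only squeezes values into [0, 1], so the variance is not 1.
print(min_maxed.min(), min_maxed.max(), min_maxed.var())
```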
###Code
def drawVectors(transformed_features, components_, columns, plt, scaled):
if not scaled:
return plt.axes() # No cheating ;-)
num_columns = len(columns)
    # This function will project your *original* features (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## visualize projections
# Sort each column by it's length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75)
return ax
###Output
_____no_output_____
###Markdown
And Now, The Assignment
###Code
# Do * NOT * alter this line, until instructed!
scaleFeatures = True
###Output
_____no_output_____
###Markdown
Load up the dataset specified on the lab instructions page and remove any and all _rows_ that have a NaN in them. You should be a pro at this by now ;-)**QUESTION**: Should the `id` column be included in your dataset as a feature?
###Code
# .. your code here ..
df=pd.read_csv('Datasets/kidney_disease.csv',sep=',',index_col=0)
df[['pcv','wc','rc']]=df[['pcv','wc','rc']].apply(pd.to_numeric,errors='coerce')
df.dropna(inplace=True)
df=pd.get_dummies(df,columns=['rbc', 'pc', 'pcc', 'ba', 'htn', 'dm', 'cad', 'appet', 'pe', 'ane'])
print(df.dtypes)
print(df.describe)
###Output
age float64
bp float64
sg float64
al float64
su float64
bgr float64
bu float64
sc float64
sod float64
pot float64
hemo float64
pcv float64
wc float64
rc float64
classification object
rbc_abnormal uint8
rbc_normal uint8
pc_abnormal uint8
pc_normal uint8
pcc_notpresent uint8
pcc_present uint8
ba_notpresent uint8
ba_present uint8
htn_no uint8
htn_yes uint8
dm_no uint8
dm_yes uint8
cad_no uint8
cad_yes uint8
appet_good uint8
appet_poor uint8
pe_no uint8
pe_yes uint8
ane_no uint8
ane_yes uint8
dtype: object
<bound method NDFrame.describe of age bp sg al su bgr bu sc sod pot ... \
id ...
3 48.0 70.0 1.005 4.0 0.0 117.0 56.0 3.8 111.0 2.5 ...
9 53.0 90.0 1.020 2.0 0.0 70.0 107.0 7.2 114.0 3.7 ...
11 63.0 70.0 1.010 3.0 0.0 380.0 60.0 2.7 131.0 4.2 ...
14 68.0 80.0 1.010 3.0 2.0 157.0 90.0 4.1 130.0 6.4 ...
20 61.0 80.0 1.015 2.0 0.0 173.0 148.0 3.9 135.0 5.2 ...
22 48.0 80.0 1.025 4.0 0.0 95.0 163.0 7.7 136.0 3.8 ...
27 69.0 70.0 1.010 3.0 4.0 264.0 87.0 2.7 130.0 4.0 ...
48 73.0 70.0 1.005 0.0 0.0 70.0 32.0 0.9 125.0 4.0 ...
58 73.0 80.0 1.020 2.0 0.0 253.0 142.0 4.6 138.0 5.8 ...
71 46.0 60.0 1.010 1.0 0.0 163.0 92.0 3.3 141.0 4.0 ...
74 56.0 90.0 1.015 2.0 0.0 129.0 107.0 6.7 131.0 4.8 ...
76 48.0 80.0 1.005 4.0 0.0 133.0 139.0 8.5 132.0 5.5 ...
84 59.0 70.0 1.010 3.0 0.0 76.0 186.0 15.0 135.0 7.6 ...
90 63.0 100.0 1.010 2.0 2.0 280.0 35.0 3.2 143.0 3.5 ...
91 56.0 70.0 1.015 4.0 1.0 210.0 26.0 1.7 136.0 3.8 ...
92 71.0 70.0 1.010 3.0 0.0 219.0 82.0 3.6 133.0 4.4 ...
93 73.0 100.0 1.010 3.0 2.0 295.0 90.0 5.6 140.0 2.9 ...
127 71.0 60.0 1.015 4.0 0.0 118.0 125.0 5.3 136.0 4.9 ...
128 52.0 90.0 1.015 4.0 3.0 224.0 166.0 5.6 133.0 47.0 ...
130 50.0 90.0 1.010 2.0 0.0 128.0 208.0 9.2 134.0 4.8 ...
133 70.0 100.0 1.015 4.0 0.0 118.0 125.0 5.3 136.0 4.9 ...
144 60.0 90.0 1.010 2.0 0.0 105.0 53.0 2.3 136.0 5.2 ...
147 60.0 60.0 1.010 3.0 1.0 288.0 36.0 1.7 130.0 3.0 ...
153 55.0 90.0 1.010 2.0 1.0 273.0 235.0 14.2 132.0 3.4 ...
157 62.0 70.0 1.025 3.0 0.0 122.0 42.0 1.7 136.0 4.7 ...
159 59.0 80.0 1.010 1.0 0.0 303.0 35.0 1.3 122.0 3.5 ...
171 83.0 70.0 1.020 3.0 0.0 102.0 60.0 2.6 115.0 5.7 ...
176 21.0 90.0 1.010 4.0 0.0 107.0 40.0 1.7 125.0 3.5 ...
181 45.0 70.0 1.025 2.0 0.0 117.0 52.0 2.2 136.0 3.8 ...
189 64.0 60.0 1.010 4.0 1.0 239.0 58.0 4.3 137.0 5.4 ...
.. ... ... ... ... ... ... ... ... ... ... ...
368 30.0 80.0 1.025 0.0 0.0 82.0 42.0 0.7 146.0 5.0 ...
369 75.0 70.0 1.020 0.0 0.0 107.0 48.0 0.8 144.0 3.5 ...
370 69.0 70.0 1.020 0.0 0.0 83.0 42.0 1.2 139.0 3.7 ...
371 28.0 60.0 1.025 0.0 0.0 79.0 50.0 0.5 145.0 5.0 ...
372 72.0 60.0 1.020 0.0 0.0 109.0 26.0 0.9 150.0 4.9 ...
373 61.0 70.0 1.025 0.0 0.0 133.0 38.0 1.0 142.0 3.6 ...
374 79.0 80.0 1.025 0.0 0.0 111.0 44.0 1.2 146.0 3.6 ...
375 70.0 80.0 1.020 0.0 0.0 74.0 41.0 0.5 143.0 4.5 ...
376 58.0 70.0 1.025 0.0 0.0 88.0 16.0 1.1 147.0 3.5 ...
377 64.0 70.0 1.020 0.0 0.0 97.0 27.0 0.7 145.0 4.8 ...
379 62.0 80.0 1.025 0.0 0.0 78.0 45.0 0.6 138.0 3.5 ...
380 59.0 60.0 1.020 0.0 0.0 113.0 23.0 1.1 139.0 3.5 ...
382 48.0 80.0 1.025 0.0 0.0 75.0 22.0 0.8 137.0 5.0 ...
383 80.0 80.0 1.025 0.0 0.0 119.0 46.0 0.7 141.0 4.9 ...
384 57.0 60.0 1.020 0.0 0.0 132.0 18.0 1.1 150.0 4.7 ...
385 63.0 70.0 1.020 0.0 0.0 113.0 25.0 0.6 146.0 4.9 ...
386 46.0 70.0 1.025 0.0 0.0 100.0 47.0 0.5 142.0 3.5 ...
387 15.0 80.0 1.025 0.0 0.0 93.0 17.0 0.9 136.0 3.9 ...
388 51.0 80.0 1.020 0.0 0.0 94.0 15.0 1.2 144.0 3.7 ...
389 41.0 80.0 1.025 0.0 0.0 112.0 48.0 0.7 140.0 5.0 ...
390 52.0 80.0 1.025 0.0 0.0 99.0 25.0 0.8 135.0 3.7 ...
391 36.0 80.0 1.025 0.0 0.0 85.0 16.0 1.1 142.0 4.1 ...
392 57.0 80.0 1.020 0.0 0.0 133.0 48.0 1.2 147.0 4.3 ...
393 43.0 60.0 1.025 0.0 0.0 117.0 45.0 0.7 141.0 4.4 ...
394 50.0 80.0 1.020 0.0 0.0 137.0 46.0 0.8 139.0 5.0 ...
395 55.0 80.0 1.020 0.0 0.0 140.0 49.0 0.5 150.0 4.9 ...
396 42.0 70.0 1.025 0.0 0.0 75.0 31.0 1.2 141.0 3.5 ...
397 12.0 80.0 1.020 0.0 0.0 100.0 26.0 0.6 137.0 4.4 ...
398 17.0 60.0 1.025 0.0 0.0 114.0 50.0 1.0 135.0 4.9 ...
399 58.0 80.0 1.025 0.0 0.0 131.0 18.0 1.1 141.0 3.5 ...
dm_no dm_yes cad_no cad_yes appet_good appet_poor pe_no pe_yes \
id
3 1 0 1 0 0 1 0 1
9 0 1 1 0 0 1 1 0
11 0 1 1 0 0 1 0 1
14 0 1 0 1 0 1 0 1
20 0 1 0 1 0 1 0 1
22 1 0 1 0 1 0 1 0
27 0 1 0 1 1 0 0 1
48 0 1 1 0 1 0 0 1
58 0 1 0 1 1 0 1 0
71 0 1 1 0 1 0 1 0
74 1 0 1 0 1 0 1 0
76 0 1 1 0 1 0 0 1
84 1 0 1 0 0 1 0 1
90 1 0 0 1 1 0 1 0
91 1 0 1 0 1 0 1 0
92 0 1 0 1 1 0 1 0
93 0 1 0 1 0 1 1 0
127 0 1 1 0 0 1 0 1
128 0 1 1 0 1 0 1 0
130 1 0 1 0 0 1 0 1
133 1 0 1 0 1 0 1 0
144 1 0 1 0 1 0 1 0
147 1 0 1 0 0 1 1 0
153 0 1 1 0 0 1 0 1
157 0 1 1 0 1 0 1 0
159 0 1 1 0 0 1 1 0
171 1 0 1 0 0 1 1 0
176 1 0 1 0 1 0 1 0
181 1 0 1 0 1 0 1 0
189 0 1 1 0 0 1 0 1
.. ... ... ... ... ... ... ... ...
368 1 0 1 0 1 0 1 0
369 1 0 1 0 1 0 1 0
370 1 0 1 0 1 0 1 0
371 1 0 1 0 1 0 1 0
372 1 0 1 0 1 0 1 0
373 1 0 1 0 1 0 1 0
374 1 0 1 0 1 0 1 0
375 1 0 1 0 1 0 1 0
376 1 0 1 0 1 0 1 0
377 1 0 1 0 1 0 1 0
379 1 0 1 0 1 0 1 0
380 1 0 1 0 1 0 1 0
382 1 0 1 0 1 0 1 0
383 1 0 1 0 1 0 1 0
384 1 0 1 0 1 0 1 0
385 1 0 1 0 1 0 1 0
386 1 0 1 0 1 0 1 0
387 1 0 1 0 1 0 1 0
388 1 0 1 0 1 0 1 0
389 1 0 1 0 1 0 1 0
390 1 0 1 0 1 0 1 0
391 1 0 1 0 1 0 1 0
392 1 0 1 0 1 0 1 0
393 1 0 1 0 1 0 1 0
394 1 0 1 0 1 0 1 0
395 1 0 1 0 1 0 1 0
396 1 0 1 0 1 0 1 0
397 1 0 1 0 1 0 1 0
398 1 0 1 0 1 0 1 0
399 1 0 1 0 1 0 1 0
ane_no ane_yes
id
3 0 1
9 0 1
11 1 0
14 1 0
20 0 1
22 0 1
27 1 0
48 1 0
58 1 0
71 1 0
74 1 0
76 1 0
84 0 1
90 1 0
91 1 0
92 1 0
93 1 0
127 1 0
128 0 1
130 0 1
133 1 0
144 1 0
147 0 1
153 0 1
157 1 0
159 1 0
171 0 1
176 0 1
181 1 0
189 1 0
.. ... ...
368 1 0
369 1 0
370 1 0
371 1 0
372 1 0
373 1 0
374 1 0
375 1 0
376 1 0
377 1 0
379 1 0
380 1 0
382 1 0
383 1 0
384 1 0
385 1 0
386 1 0
387 1 0
388 1 0
389 1 0
390 1 0
391 1 0
392 1 0
393 1 0
394 1 0
395 1 0
396 1 0
397 1 0
398 1 0
399 1 0
[158 rows x 35 columns]>
###Markdown
Let's build some color-coded labels; the actual label feature will be removed prior to executing PCA, since it's unsupervised. You're only labeling by color so you can see the effects of PCA: Use an indexer to select only the following columns: `['bgr','wc','rc']`
###Code
# .. your code here ..
#df=df[['bgr','wc','rc']]
labels = ['red' if i=='ckd' else 'green' for i in df.classification]
df.drop(['classification'],axis=1,inplace=True)
print(df.dtypes)
print(df.describe)
###Output
age float64
bp float64
sg float64
al float64
su float64
bgr float64
bu float64
sc float64
sod float64
pot float64
hemo float64
pcv float64
wc float64
rc float64
rbc_abnormal uint8
rbc_normal uint8
pc_abnormal uint8
pc_normal uint8
pcc_notpresent uint8
pcc_present uint8
ba_notpresent uint8
ba_present uint8
htn_no uint8
htn_yes uint8
dm_no uint8
dm_yes uint8
cad_no uint8
cad_yes uint8
appet_good uint8
appet_poor uint8
pe_no uint8
pe_yes uint8
ane_no uint8
ane_yes uint8
dtype: object
<bound method NDFrame.describe of age bp sg al su bgr bu sc sod pot ... \
id ...
3 48.0 70.0 1.005 4.0 0.0 117.0 56.0 3.8 111.0 2.5 ...
9 53.0 90.0 1.020 2.0 0.0 70.0 107.0 7.2 114.0 3.7 ...
11 63.0 70.0 1.010 3.0 0.0 380.0 60.0 2.7 131.0 4.2 ...
14 68.0 80.0 1.010 3.0 2.0 157.0 90.0 4.1 130.0 6.4 ...
20 61.0 80.0 1.015 2.0 0.0 173.0 148.0 3.9 135.0 5.2 ...
22 48.0 80.0 1.025 4.0 0.0 95.0 163.0 7.7 136.0 3.8 ...
27 69.0 70.0 1.010 3.0 4.0 264.0 87.0 2.7 130.0 4.0 ...
48 73.0 70.0 1.005 0.0 0.0 70.0 32.0 0.9 125.0 4.0 ...
58 73.0 80.0 1.020 2.0 0.0 253.0 142.0 4.6 138.0 5.8 ...
71 46.0 60.0 1.010 1.0 0.0 163.0 92.0 3.3 141.0 4.0 ...
74 56.0 90.0 1.015 2.0 0.0 129.0 107.0 6.7 131.0 4.8 ...
76 48.0 80.0 1.005 4.0 0.0 133.0 139.0 8.5 132.0 5.5 ...
84 59.0 70.0 1.010 3.0 0.0 76.0 186.0 15.0 135.0 7.6 ...
90 63.0 100.0 1.010 2.0 2.0 280.0 35.0 3.2 143.0 3.5 ...
91 56.0 70.0 1.015 4.0 1.0 210.0 26.0 1.7 136.0 3.8 ...
92 71.0 70.0 1.010 3.0 0.0 219.0 82.0 3.6 133.0 4.4 ...
93 73.0 100.0 1.010 3.0 2.0 295.0 90.0 5.6 140.0 2.9 ...
127 71.0 60.0 1.015 4.0 0.0 118.0 125.0 5.3 136.0 4.9 ...
128 52.0 90.0 1.015 4.0 3.0 224.0 166.0 5.6 133.0 47.0 ...
130 50.0 90.0 1.010 2.0 0.0 128.0 208.0 9.2 134.0 4.8 ...
133 70.0 100.0 1.015 4.0 0.0 118.0 125.0 5.3 136.0 4.9 ...
144 60.0 90.0 1.010 2.0 0.0 105.0 53.0 2.3 136.0 5.2 ...
147 60.0 60.0 1.010 3.0 1.0 288.0 36.0 1.7 130.0 3.0 ...
153 55.0 90.0 1.010 2.0 1.0 273.0 235.0 14.2 132.0 3.4 ...
157 62.0 70.0 1.025 3.0 0.0 122.0 42.0 1.7 136.0 4.7 ...
159 59.0 80.0 1.010 1.0 0.0 303.0 35.0 1.3 122.0 3.5 ...
171 83.0 70.0 1.020 3.0 0.0 102.0 60.0 2.6 115.0 5.7 ...
176 21.0 90.0 1.010 4.0 0.0 107.0 40.0 1.7 125.0 3.5 ...
181 45.0 70.0 1.025 2.0 0.0 117.0 52.0 2.2 136.0 3.8 ...
189 64.0 60.0 1.010 4.0 1.0 239.0 58.0 4.3 137.0 5.4 ...
.. ... ... ... ... ... ... ... ... ... ... ...
368 30.0 80.0 1.025 0.0 0.0 82.0 42.0 0.7 146.0 5.0 ...
369 75.0 70.0 1.020 0.0 0.0 107.0 48.0 0.8 144.0 3.5 ...
370 69.0 70.0 1.020 0.0 0.0 83.0 42.0 1.2 139.0 3.7 ...
371 28.0 60.0 1.025 0.0 0.0 79.0 50.0 0.5 145.0 5.0 ...
372 72.0 60.0 1.020 0.0 0.0 109.0 26.0 0.9 150.0 4.9 ...
373 61.0 70.0 1.025 0.0 0.0 133.0 38.0 1.0 142.0 3.6 ...
374 79.0 80.0 1.025 0.0 0.0 111.0 44.0 1.2 146.0 3.6 ...
375 70.0 80.0 1.020 0.0 0.0 74.0 41.0 0.5 143.0 4.5 ...
376 58.0 70.0 1.025 0.0 0.0 88.0 16.0 1.1 147.0 3.5 ...
377 64.0 70.0 1.020 0.0 0.0 97.0 27.0 0.7 145.0 4.8 ...
379 62.0 80.0 1.025 0.0 0.0 78.0 45.0 0.6 138.0 3.5 ...
380 59.0 60.0 1.020 0.0 0.0 113.0 23.0 1.1 139.0 3.5 ...
382 48.0 80.0 1.025 0.0 0.0 75.0 22.0 0.8 137.0 5.0 ...
383 80.0 80.0 1.025 0.0 0.0 119.0 46.0 0.7 141.0 4.9 ...
384 57.0 60.0 1.020 0.0 0.0 132.0 18.0 1.1 150.0 4.7 ...
385 63.0 70.0 1.020 0.0 0.0 113.0 25.0 0.6 146.0 4.9 ...
386 46.0 70.0 1.025 0.0 0.0 100.0 47.0 0.5 142.0 3.5 ...
387 15.0 80.0 1.025 0.0 0.0 93.0 17.0 0.9 136.0 3.9 ...
388 51.0 80.0 1.020 0.0 0.0 94.0 15.0 1.2 144.0 3.7 ...
389 41.0 80.0 1.025 0.0 0.0 112.0 48.0 0.7 140.0 5.0 ...
390 52.0 80.0 1.025 0.0 0.0 99.0 25.0 0.8 135.0 3.7 ...
391 36.0 80.0 1.025 0.0 0.0 85.0 16.0 1.1 142.0 4.1 ...
392 57.0 80.0 1.020 0.0 0.0 133.0 48.0 1.2 147.0 4.3 ...
393 43.0 60.0 1.025 0.0 0.0 117.0 45.0 0.7 141.0 4.4 ...
394 50.0 80.0 1.020 0.0 0.0 137.0 46.0 0.8 139.0 5.0 ...
395 55.0 80.0 1.020 0.0 0.0 140.0 49.0 0.5 150.0 4.9 ...
396 42.0 70.0 1.025 0.0 0.0 75.0 31.0 1.2 141.0 3.5 ...
397 12.0 80.0 1.020 0.0 0.0 100.0 26.0 0.6 137.0 4.4 ...
398 17.0 60.0 1.025 0.0 0.0 114.0 50.0 1.0 135.0 4.9 ...
399 58.0 80.0 1.025 0.0 0.0 131.0 18.0 1.1 141.0 3.5 ...
dm_no dm_yes cad_no cad_yes appet_good appet_poor pe_no pe_yes \
id
3 1 0 1 0 0 1 0 1
9 0 1 1 0 0 1 1 0
11 0 1 1 0 0 1 0 1
14 0 1 0 1 0 1 0 1
20 0 1 0 1 0 1 0 1
22 1 0 1 0 1 0 1 0
27 0 1 0 1 1 0 0 1
48 0 1 1 0 1 0 0 1
58 0 1 0 1 1 0 1 0
71 0 1 1 0 1 0 1 0
74 1 0 1 0 1 0 1 0
76 0 1 1 0 1 0 0 1
84 1 0 1 0 0 1 0 1
90 1 0 0 1 1 0 1 0
91 1 0 1 0 1 0 1 0
92 0 1 0 1 1 0 1 0
93 0 1 0 1 0 1 1 0
127 0 1 1 0 0 1 0 1
128 0 1 1 0 1 0 1 0
130 1 0 1 0 0 1 0 1
133 1 0 1 0 1 0 1 0
144 1 0 1 0 1 0 1 0
147 1 0 1 0 0 1 1 0
153 0 1 1 0 0 1 0 1
157 0 1 1 0 1 0 1 0
159 0 1 1 0 0 1 1 0
171 1 0 1 0 0 1 1 0
176 1 0 1 0 1 0 1 0
181 1 0 1 0 1 0 1 0
189 0 1 1 0 0 1 0 1
.. ... ... ... ... ... ... ... ...
368 1 0 1 0 1 0 1 0
369 1 0 1 0 1 0 1 0
370 1 0 1 0 1 0 1 0
371 1 0 1 0 1 0 1 0
372 1 0 1 0 1 0 1 0
373 1 0 1 0 1 0 1 0
374 1 0 1 0 1 0 1 0
375 1 0 1 0 1 0 1 0
376 1 0 1 0 1 0 1 0
377 1 0 1 0 1 0 1 0
379 1 0 1 0 1 0 1 0
380 1 0 1 0 1 0 1 0
382 1 0 1 0 1 0 1 0
383 1 0 1 0 1 0 1 0
384 1 0 1 0 1 0 1 0
385 1 0 1 0 1 0 1 0
386 1 0 1 0 1 0 1 0
387 1 0 1 0 1 0 1 0
388 1 0 1 0 1 0 1 0
389 1 0 1 0 1 0 1 0
390 1 0 1 0 1 0 1 0
391 1 0 1 0 1 0 1 0
392 1 0 1 0 1 0 1 0
393 1 0 1 0 1 0 1 0
394 1 0 1 0 1 0 1 0
395 1 0 1 0 1 0 1 0
396 1 0 1 0 1 0 1 0
397 1 0 1 0 1 0 1 0
398 1 0 1 0 1 0 1 0
399 1 0 1 0 1 0 1 0
ane_no ane_yes
id
3 0 1
9 0 1
11 1 0
14 1 0
20 0 1
22 0 1
27 1 0
48 1 0
58 1 0
71 1 0
74 1 0
76 1 0
84 0 1
90 1 0
91 1 0
92 1 0
93 1 0
127 1 0
128 0 1
130 0 1
133 1 0
144 1 0
147 0 1
153 0 1
157 1 0
159 1 0
171 0 1
176 0 1
181 1 0
189 1 0
.. ... ...
368 1 0
369 1 0
370 1 0
371 1 0
372 1 0
373 1 0
374 1 0
375 1 0
376 1 0
377 1 0
379 1 0
380 1 0
382 1 0
383 1 0
384 1 0
385 1 0
386 1 0
387 1 0
388 1 0
389 1 0
390 1 0
391 1 0
392 1 0
393 1 0
394 1 0
395 1 0
396 1 0
397 1 0
398 1 0
399 1 0
[158 rows x 34 columns]>
###Markdown
Either take a look at the dataset's webpage in the attribute info section of UCI's [Chronic Kidney Disease]() page: https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease or alternatively, you can actually look at the first few rows of your dataframe using `.head()`. What kind of data type should these three columns be? Compare what you see with the results when you print out your dataframe's `dtypes`.If Pandas did not properly detect and convert your columns to the data types you expected, use an appropriate command to coerce these features to the right type.
###Code
df.reset_index(drop=True)
# .. your code here ..
###Output
_____no_output_____
###Markdown
PCA operates based on variance. The variable with the greatest variance will dominate. Examine your data using a command that will check the variance of every feature in your dataset, and then print out the results. Also print out the results of running `.describe` on your dataset._Hint:_ If you do not see all three variables: `'bgr'`, `'wc'`, and `'rc'`, then it's likely you did not complete the previous step properly.
###Code
# .. your code here ..
df.var(axis=0)
print (df.var())
print(df.describe())
###Output
age 2.406297e+02
bp 1.248891e+02
sg 3.023865e-05
al 1.996936e+00
su 6.616141e-01
bgr 4.217182e+03
bu 2.246322e+03
sc 9.471717e+00
sod 5.609143e+01
pot 1.208501e+01
hemo 8.307100e+00
pcv 8.290402e+01
wc 9.777380e+06
rc 1.039104e+00
rbc_abnormal 1.015883e-01
rbc_normal 1.015883e-01
pc_abnormal 1.508103e-01
pc_normal 1.508103e-01
pcc_notpresent 8.127066e-02
pcc_present 8.127066e-02
ba_notpresent 7.062807e-02
ba_present 7.062807e-02
htn_no 1.699589e-01
htn_yes 1.699589e-01
dm_no 1.467387e-01
dm_yes 1.467387e-01
cad_no 6.518584e-02
cad_yes 6.518584e-02
appet_good 1.064662e-01
appet_poor 1.064662e-01
pe_no 1.112634e-01
pe_yes 1.112634e-01
ane_no 9.159074e-02
ane_yes 9.159074e-02
dtype: float64
age bp sg al su bgr \
count 158.000000 158.000000 158.000000 158.000000 158.000000 158.000000
mean 49.563291 74.050633 1.019873 0.797468 0.253165 131.341772
std 15.512244 11.175381 0.005499 1.413130 0.813397 64.939832
min 6.000000 50.000000 1.005000 0.000000 0.000000 70.000000
25% 39.250000 60.000000 1.020000 0.000000 0.000000 97.000000
50% 50.500000 80.000000 1.020000 0.000000 0.000000 115.500000
75% 60.000000 80.000000 1.025000 1.000000 0.000000 131.750000
max 83.000000 110.000000 1.025000 4.000000 5.000000 490.000000
bu sc sod pot ... dm_no \
count 158.000000 158.000000 158.000000 158.000000 ... 158.000000
mean 52.575949 2.188608 138.848101 4.636709 ... 0.822785
std 47.395382 3.077615 7.489421 3.476351 ... 0.383065
min 10.000000 0.400000 111.000000 2.500000 ... 0.000000
25% 26.000000 0.700000 135.000000 3.700000 ... 1.000000
50% 39.500000 1.100000 139.000000 4.500000 ... 1.000000
75% 49.750000 1.600000 144.000000 4.900000 ... 1.000000
max 309.000000 15.200000 150.000000 47.000000 ... 1.000000
dm_yes cad_no cad_yes appet_good appet_poor pe_no \
count 158.000000 158.000000 158.000000 158.000000 158.000000 158.000000
mean 0.177215 0.930380 0.069620 0.879747 0.120253 0.873418
std 0.383065 0.255315 0.255315 0.326292 0.326292 0.333562
min 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
25% 0.000000 1.000000 0.000000 1.000000 0.000000 1.000000
50% 0.000000 1.000000 0.000000 1.000000 0.000000 1.000000
75% 0.000000 1.000000 0.000000 1.000000 0.000000 1.000000
max 1.000000 1.000000 1.000000 1.000000 1.000000 1.000000
pe_yes ane_no ane_yes
count 158.000000 158.000000 158.000000
mean 0.126582 0.898734 0.101266
std 0.333562 0.302640 0.302640
min 0.000000 0.000000 0.000000
25% 0.000000 1.000000 0.000000
50% 0.000000 1.000000 0.000000
75% 0.000000 1.000000 0.000000
max 1.000000 1.000000 1.000000
[8 rows x 34 columns]
###Markdown
Below, we assume your dataframe's variable is named `df`. If it isn't, make the appropriate changes. But do not alter the code in `scaleFeaturesDF()` just yet!
###Code
# .. your (possible) code adjustment here ..
if scaleFeatures: df = scaleFeaturesDF(df)
###Output
New Variances:
age 1.006369
bp 1.006369
sg 1.006369
al 1.006369
su 1.006369
bgr 1.006369
bu 1.006369
sc 1.006369
sod 1.006369
pot 1.006369
hemo 1.006369
pcv 1.006369
wc 1.006369
rc 1.006369
rbc_abnormal 1.006369
rbc_normal 1.006369
pc_abnormal 1.006369
pc_normal 1.006369
pcc_notpresent 1.006369
pcc_present 1.006369
ba_notpresent 1.006369
ba_present 1.006369
htn_no 1.006369
htn_yes 1.006369
dm_no 1.006369
dm_yes 1.006369
cad_no 1.006369
cad_yes 1.006369
appet_good 1.006369
appet_poor 1.006369
pe_no 1.006369
pe_yes 1.006369
ane_no 1.006369
ane_yes 1.006369
dtype: float64
New Describe:
age bp sg al su \
count 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02
mean 1.032929e-16 7.406171e-16 -1.624580e-15 -7.757508e-16 -2.108018e-18
std 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00
min -2.817246e+00 -2.158952e+00 -2.713365e+00 -5.661221e-01 -3.122333e-01
25% -6.669624e-01 -1.261282e+00 2.309247e-02 -5.661221e-01 -3.122333e-01
50% 6.057713e-02 5.340564e-01 2.309247e-02 -5.661221e-01 -3.122333e-01
75% 6.749439e-01 5.340564e-01 9.352451e-01 1.437770e-01 -3.122333e-01
max 2.162358e+00 3.227064e+00 9.352451e-01 2.273474e+00 5.854375e+00
bgr bu sc sod pot \
count 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02
mean -9.755075e-17 -2.578809e-16 7.869935e-17 8.119384e-16 4.321438e-17
std 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00
min -9.475974e-01 -9.011706e-01 -5.830146e-01 -3.730148e+00 -6.165957e-01
25% -5.305059e-01 -5.625116e-01 -4.852266e-01 -5.154386e-01 -2.703085e-01
50% -2.447210e-01 -2.767680e-01 -3.548426e-01 2.034626e-02 -3.945044e-02
75% 6.306235e-03 -5.981458e-02 -1.918626e-01 6.900774e-01 7.597862e-02
max 5.540492e+00 5.427520e+00 4.241194e+00 1.493755e+00 1.222489e+01
... dm_no dm_yes cad_no cad_yes \
count ... 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02
mean ... -5.417607e-16 3.478230e-16 8.207218e-16 -3.941994e-16
std ... 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00
min ... -2.154729e+00 -4.640955e-01 -3.655631e+00 -2.735506e-01
25% ... 4.640955e-01 -4.640955e-01 2.735506e-01 -2.735506e-01
50% ... 4.640955e-01 -4.640955e-01 2.735506e-01 -2.735506e-01
75% ... 4.640955e-01 -4.640955e-01 2.735506e-01 -2.735506e-01
max ... 4.640955e-01 2.154729e+00 2.735506e-01 3.655631e+00
appet_good appet_poor pe_no pe_yes ane_no \
count 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02 1.580000e+02
mean 3.913887e-16 -3.927941e-16 -8.291539e-16 8.291539e-16 -1.686415e-16
std 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00 1.003180e+00
min -2.704772e+00 -3.697170e-01 -2.626785e+00 -3.806935e-01 -2.979094e+00
25% 3.697170e-01 -3.697170e-01 3.806935e-01 -3.806935e-01 3.356725e-01
50% 3.697170e-01 -3.697170e-01 3.806935e-01 -3.806935e-01 3.356725e-01
75% 3.697170e-01 -3.697170e-01 3.806935e-01 -3.806935e-01 3.356725e-01
max 3.697170e-01 2.704772e+00 3.806935e-01 2.626785e+00 3.356725e-01
ane_yes
count 1.580000e+02
mean 7.869935e-17
std 1.003180e+00
min -3.356725e-01
25% -3.356725e-01
50% -3.356725e-01
75% -3.356725e-01
max 2.979094e+00
[8 rows x 34 columns]
###Markdown
Run PCA on your dataset, reducing it to 2 principal components. Make sure your PCA model is saved in a variable called `'pca'`, and that the results of your transformation are saved in another variable `'T'`:
###Code
# .. your code here ..
pca=PCA(n_components=2)
T=pca.fit_transform(df)
print(T)
###Output
[[ 7.40571153e+00 -5.43388620e+00]
[ 8.21313912e+00 -3.80916306e+00]
[ 8.15781916e+00 4.97229631e-01]
[ 1.05963543e+01 2.28891097e+00]
[ 9.44702510e+00 -9.76322819e-02]
[ 3.77608141e+00 -3.45653656e+00]
[ 6.49399598e+00 4.77553145e+00]
[ 3.49232725e+00 3.13413817e-01]
[ 5.48091667e+00 4.36358285e+00]
[ 2.38110879e+00 7.78254699e-01]
[ 3.78273710e+00 1.89743461e-01]
[ 6.28105664e+00 2.03976907e+00]
[ 7.55897683e+00 -5.56544088e+00]
[ 3.32183735e+00 5.56899126e+00]
[ 5.55330990e-04 1.92658101e+00]
[ 6.86332695e+00 3.95825182e+00]
[ 8.92564807e+00 3.40946867e+00]
[ 4.78638553e+00 -9.40723269e-01]
[ 7.00235753e+00 -1.20924925e+00]
[ 8.42680121e+00 -4.85097558e+00]
[ 8.37737011e-01 3.22885751e-02]
[ 8.85103841e-01 6.32233656e-01]
[ 6.68421732e+00 -3.65490242e+00]
[ 1.03247654e+01 -2.33331878e+00]
[ 2.33083423e+00 7.85873746e-01]
[ 3.38481712e+00 1.17320895e+00]
[ 4.09016492e+00 -4.16636781e+00]
[ 4.99981257e+00 -2.82356773e+00]
[ 1.82169880e+00 -1.83891419e+00]
[ 8.26189613e+00 2.42970136e+00]
[ 3.96521233e+00 8.42418341e-02]
[ 9.49945240e+00 -3.17136233e+00]
[ 5.51199747e+00 5.91597921e-01]
[ 6.76825936e+00 1.37282577e+00]
[ 6.32211702e+00 3.45686662e-01]
[ 6.99373821e+00 1.69729966e+00]
[ 6.05942715e+00 6.84686503e+00]
[ 7.59670737e+00 3.54254699e+00]
[ 5.40408814e+00 -7.49134080e-01]
[ 1.06381223e+01 1.92596929e+00]
[ 7.58188529e+00 -1.62995970e+00]
[ 5.75522682e+00 6.74173638e+00]
[ 1.16951367e+01 -5.37017954e+00]
[ -1.94981286e+00 -1.29001306e-01]
[ -2.80834881e+00 -2.30787608e-01]
[ -2.30522508e+00 -1.83484938e-01]
[ -2.21135861e+00 -1.03562125e-02]
[ -2.13870571e+00 -3.20327133e-01]
[ -2.54147495e+00 7.40153104e-03]
[ -2.05731469e+00 2.05665001e-01]
[ -2.15707089e+00 -3.55132017e-01]
[ -2.05285730e+00 -1.23056736e-01]
[ -2.09389860e+00 -2.64420415e-01]
[ -2.05795198e+00 -1.18488273e-02]
[ -1.95793336e+00 -1.39098633e-02]
[ -1.91942913e+00 3.22227267e-03]
[ -1.89546100e+00 -1.42922386e-01]
[ -2.00982458e+00 2.98350772e-02]
[ -2.10445805e+00 6.59456185e-02]
[ -1.61948036e+00 -4.38051970e-02]
[ -2.13021073e+00 2.10313506e-02]
[ -2.43512392e+00 -1.56487923e-01]
[ -2.20146993e+00 -3.09958104e-01]
[ -2.08632334e+00 -3.45438264e-01]
[ -2.00545018e+00 1.30168596e-02]
[ -1.94121848e+00 3.75241748e-02]
[ -2.25003619e+00 -2.26135203e-01]
[ -2.23172229e+00 -5.43342843e-02]
[ -2.64233808e+00 -5.43148249e-02]
[ -2.19445729e+00 1.23780302e-01]
[ -1.93288432e+00 -4.31248977e-01]
[ -2.76734833e+00 -8.60162606e-02]
[ -2.13992544e+00 1.74491303e-03]
[ -2.22919689e+00 1.48581605e-01]
[ -2.29802335e+00 -1.16493396e-01]
[ -2.08125395e+00 -1.14113704e-01]
[ -2.27218271e+00 -2.52438362e-01]
[ -2.25770213e+00 -8.73750467e-03]
[ -1.92928241e+00 -5.19525141e-01]
[ -2.15719963e+00 3.91631017e-01]
[ -2.54321627e+00 -1.24682685e-01]
[ -1.83904746e+00 -3.62063479e-01]
[ -1.99601098e+00 -6.46447304e-02]
[ -2.27004031e+00 1.84540868e-01]
[ -2.12099276e+00 1.91474622e-01]
[ -2.04889138e+00 -4.57212208e-01]
[ -2.02988164e+00 -1.61769061e-01]
[ -2.40440995e+00 1.72349774e-01]
[ -2.26544342e+00 1.30745986e-01]
[ -2.10930633e+00 -1.22478821e-01]
[ -1.72853478e+00 -1.92531771e-01]
[ -2.51057409e+00 1.37203549e-01]
[ -2.38837064e+00 1.52247389e-01]
[ -1.97804942e+00 6.02226890e-02]
[ -2.12108536e+00 -4.06412432e-02]
[ -2.19268435e+00 4.03858924e-02]
[ -2.12639496e+00 -1.05328954e-01]
[ -2.51139840e+00 -8.89576140e-02]
[ -2.61952642e+00 -2.83735262e-01]
[ -2.06762213e+00 8.93641217e-02]
[ -2.47904852e+00 -2.04284206e-01]
[ -1.97898261e+00 8.80490733e-02]
[ -2.58014240e+00 -1.55372312e-01]
[ -2.14490397e+00 3.50479377e-01]
[ -2.03469317e+00 -5.00345592e-01]
[ -2.44686707e+00 -2.70070962e-01]
[ -1.99983809e+00 1.34707703e-01]
[ -2.45650899e+00 -1.38818446e-01]
[ -2.62777095e+00 -2.20276536e-01]
[ -2.10305205e+00 1.65222580e-01]
[ -2.51974313e+00 -4.42034607e-01]
[ -2.68972408e+00 -3.70924741e-02]
[ -2.41048050e+00 5.48213644e-02]
[ -1.97682326e+00 -4.31661750e-01]
[ -2.22754814e+00 -2.06798676e-01]
[ -2.39884352e+00 -1.20890339e-01]
[ -2.53781498e+00 -1.85094472e-01]
[ -2.52815467e+00 -2.24385081e-01]
[ -2.84341316e+00 -1.43166055e-01]
[ -1.89370089e+00 -6.48894674e-02]
[ -2.10052830e+00 -1.07745494e-01]
[ -2.43227046e+00 2.22563631e-01]
[ -2.46288235e+00 -3.44173993e-01]
[ -2.50635779e+00 -2.68966370e-01]
[ -2.14066663e+00 -3.33028890e-01]
[ -1.76809173e+00 -1.34140538e-03]
[ -2.26583474e+00 -5.14253937e-02]
[ -2.62509631e+00 2.17879240e-01]
[ -2.50150190e+00 -2.72286172e-01]
[ -1.73887567e+00 1.04731256e-01]
[ -2.15176001e+00 5.26705470e-03]
[ -2.88609089e+00 -2.94294522e-01]
[ -2.41262736e+00 3.91677206e-01]
[ -1.99411501e+00 5.50868754e-02]
[ -2.27884891e+00 2.97550099e-01]
[ -2.11112498e+00 5.93694365e-02]
[ -2.72364376e+00 1.47822636e-01]
[ -2.10204383e+00 9.68749142e-02]
[ -2.45819360e+00 -8.05055737e-02]
[ -2.29819942e+00 9.96795834e-02]
[ -2.79668484e+00 -9.84203940e-02]
[ -1.98500997e+00 1.77477480e-01]
[ -2.03622362e+00 2.26390727e-01]
[ -2.40386966e+00 3.20505738e-01]
[ -2.65716046e+00 -5.97016544e-02]
[ -2.63654354e+00 -3.96608855e-01]
[ -2.39786617e+00 1.25304143e-01]
[ -2.58518289e+00 -3.92378192e-02]
[ -2.32266447e+00 -1.18845803e-01]
[ -2.66443425e+00 -1.57266081e-01]
[ -2.04312611e+00 2.37434965e-01]
[ -2.42242966e+00 -1.35655438e-01]
[ -1.59091936e+00 -5.79741856e-02]
[ -2.10943378e+00 3.29180376e-01]
[ -2.89780419e+00 -1.28982091e-01]
[ -2.38689593e+00 -3.50490868e-01]
[ -2.50728620e+00 -4.82294059e-01]
[ -2.59080024e+00 3.06285529e-01]]
###Markdown
Now, plot the transformed data as a scatter plot. Recall that transforming the data will result in a NumPy NDArray. You can either use MatPlotLib to graph it directly, or you can convert it back to a DataFrame and have Pandas do it for you.Since we've already demonstrated how to plot directly with MatPlotLib in `Module4/assignment1.ipynb`, this time we'll show you how to convert your transformed data back into a Pandas DataFrame and have Pandas plot it from there.
###Code
# Since we transformed via PCA, we no longer have column names; but we know we
# are in `principal-component` space, so we'll just define the coordinates accordingly:
ax = drawVectors(T, pca.components_, df.columns.values, plt, scaleFeatures)
T = pd.DataFrame(T)
T.columns = ['component1', 'component2']
T.plot.scatter(x='component1', y='component2', marker='o', c=labels, alpha=0.75, ax=ax)
plt.show()
###Output
Features by importance:
[(2.9696150394993355, 'ane_no'), (2.969615039499335, 'ane_yes'), (2.758875925894981, 'bgr'), (2.715504955394058, 'dm_yes'), (2.7155049553940573, 'dm_no'), (2.6589320941392356, 'pcv'), (2.6455721001098698, 'hemo'), (2.602112662848505, 'al'), (2.5934172193046887, 'htn_no'), (2.593417219304688, 'htn_yes'), (2.576730565999865, 'su'), (2.485944023344342, 'cad_no'), (2.4859440233443415, 'cad_yes'), (2.484132428132541, 'sc'), (2.4794711137049363, 'pc_normal'), (2.479471113704936, 'pc_abnormal'), (2.465109416186053, 'appet_poor'), (2.4651094161860527, 'appet_good'), (2.45756650608509, 'bu'), (2.3937269696125707, 'sg'), (2.3881975796285304, 'rc'), (2.2105782461310133, 'pe_yes'), (2.2105782461310133, 'pe_no'), (2.182818062607649, 'sod'), (2.0059261508175275, 'rbc_normal'), (2.0059261508175275, 'rbc_abnormal'), (1.9861731688066917, 'ba_present'), (1.9861731688066917, 'ba_notpresent'), (1.9842911319071956, 'pcc_present'), (1.9842911319071956, 'pcc_notpresent'), (1.2846796771566322, 'age'), (1.0541946042166217, 'bp'), (0.9193845987448057, 'wc'), (0.5640412875529994, 'pot')]
|
adopt/kuwait-make-strata.ipynb | ###Markdown
CREATE CAMPAIGN
###Code
from adopt.facebook.update import Instruction
from adopt.malaria import run_instructions
def create_campaign(name):
params = {
"name": name,
"objective": "MESSAGES",
"status": "PAUSED",
"special_ad_categories": [],
}
return Instruction("campaign", "create", params)
# create_campaign_for_user(USER, CAMPAIGN, db_conf)
run_instructions([create_campaign(AD_CAMPAIGN)], state)
# cid = next(c for c in state.campaigns if c['name'] == AD_CAMPAIGN)['id']
# run_instructions([Instruction("campaign", "update", {"status": "PAUSED"}, cid)], state)
###Output
_____no_output_____
###Markdown
BASIC CONF
###Code
c = {'optimization_goal': 'REPLIES',
'destination_type': 'MESSENGER',
'adset_hours': 48,
'budget': 1500000.0,
'min_budget': 100.0,
'opt_window': 5*24,
'end_date': '2021-05-21',
'proportional': True,
'page_id': PAGE_ID,
'instagram_id': None,
'ad_account': AD_ACCOUNT,
'ad_campaign': AD_CAMPAIGN}
config = CampaignConf(**c)
create_campaign_confs(CAMPAIGNID, "opt", [config._asdict()], db_conf)
###Output
_____no_output_____
###Markdown
AUDIENCES
###Code
from adopt.marketing import dict_from_nested_type
import typedjson
import json
audiences = [
{
"name": f"vlab-vacc-{COUNTRY_CODE}-nationality-a",
"shortcodes": SURVEY_SHORTCODES,
"subtype": "LOOKALIKE",
"lookalike": {
"name": f"vlab-vacc-{COUNTRY_CODE}-nationality-a-lookalike",
"target": 1100,
"spec": {
"country": COUNTRY_CODE,
"starting_ratio": 0.0,
"ratio": 0.2
}
},
"question_targeting": {"op": "equal",
"vars": [
{"type": "response", "value": "nationality"},
{"type": "constant", "value": "A"}
]}
},
{
"name": f"vlab-vacc-{COUNTRY_CODE}-nationality-b",
"shortcodes": SURVEY_SHORTCODES,
"subtype": "LOOKALIKE",
"lookalike": {
"name": f"vlab-vacc-{COUNTRY_CODE}-nationality-b-lookalike",
"target": 1100,
"spec": {
"country": COUNTRY_CODE,
"starting_ratio": 0.0,
"ratio": 0.2
}
},
"question_targeting": {"op": "equal",
"vars": [
{"type": "response", "value": "nationality"},
{"type": "constant", "value": "B"}
]}
},
{
"name": RESPONDENT_AUDIENCE,
"shortcodes": [INITIAL_SHORTCODE],
"subtype": "CUSTOM"
},
]
audience_confs = [typedjson.decode(AudienceConf, c) for c in audiences]
confs = [dict_from_nested_type(a) for a in audience_confs]
create_campaign_confs(CAMPAIGNID, "audience", confs, db_conf)
###Output
_____no_output_____
###Markdown
CREATIVES
###Code
from mena.strata import generate_creative_confs
images = {i['name']: i for i in state.account.get_ad_images(fields=['name', 'hash'])}
creative_confs, image_confs = generate_creative_confs(CREATIVE_FILE, INITIAL_SHORTCODE, images)
create_campaign_confs(CAMPAIGNID, "creative", creative_confs, db_conf)
###Output
_____no_output_____
###Markdown
STRATA
###Code
from mena.strata import get_adsets, extraction_confs, hyphen_case
from itertools import product
from mena.strata import format_group_product
template_state = CampaignState(userinfo.token,
get_api(env, userinfo.token),
AD_ACCOUNT,
TEMPLATE_CAMPAIGN)
a, g, l = get_adsets(template_state, extraction_confs)
variables = [
{ "name": "age", "source": "facebook", "conf": a},
{ "name": "gender", "source": "facebook", "conf": g},
{ "name": "location", "source": "facebook", "conf": l},
{ "name": "nationality", "source": "survey", "conf": {
"levels": [{"name": "A",
"audiences": [f"vlab-vacc-{COUNTRY_CODE}-nationality-a-lookalike"],
"excluded_audiences": [f"vlab-vacc-{COUNTRY_CODE}-nationality-b-lookalike"],
"question_targeting": {"op": "or",
"vars": [
{"op": "equal", "vars": [
{"type": "response", "value": "nationality"},
{"type": "constant", "value": "A"}]},
]}},
{"name": "B",
"audiences": [],
"excluded_audiences": [],
"question_targeting": {"op": "equal",
"vars": [
{"type": "response", "value": "nationality"},
{"type": "constant", "value": "B"}
]}},
]}}
]
share_lookup = pd.read_csv(DISTRIBUTION_FILE, header=[0,1,2], index_col=[0])
share = share_lookup.T.reset_index().melt(id_vars=DISTRIBUTION_VARS,
var_name='location',
value_name='percentage')
groups = product(*[[(v['name'], v['source'], l) for l in v['conf']['levels']] for v in variables])
groups = [format_group_product(g, share) for g in groups]
ALL_CREATIVES = [t['name'] for t in image_confs]
def make_stratum(id_, quota, c):
return { 'id': id_,
'metadata': {**c['metadata'], **EXTRA_METADATA},
'facebook_targeting': c['facebook_targeting'],
'creatives': ALL_CREATIVES,
'audiences': c['audiences'],
'excluded_audiences': [*c['excluded_audiences'], RESPONDENT_AUDIENCE],
'quota': float(quota),
'shortcodes': SURVEY_SHORTCODES,
'question_targeting': c['question_targeting']}
from adopt.marketing import StratumConf, QuestionTargeting
import typedjson
strata = [make_stratum(*g) for g in groups]
strata_data = [dict_from_nested_type(typedjson.decode(StratumConf, c)) for c in strata]
create_campaign_confs(CAMPAIGNID, "stratum", strata_data, db_conf)
###Output
_____no_output_____
###Markdown
TESTING
###Code
mal = load_basics(CAMPAIGNID, env)
mal.state.ads
mal.state.custom_audiences
%%time
from adopt.malaria import update_ads_for_campaign, update_audience_for_campaign
# instructions, report = update_ads_for_campaign(mal)
instructions, report = update_audience_for_campaign(mal)
len(instructions)
[i.params['session'] for i in instructions]
import facebook_business
from adopt.malaria import run_instructions
run_instructions(instructions, mal.state)
import pandas as pd
from adopt.campaign_queries import get_last_adopt_report
rdf = pd.DataFrame(get_last_adopt_report(CAMPAIGNID, "FACEBOOK_ADOPT", mal.db_conf)).T
# get the frequency!
mal.state.insights
###Output
_____no_output_____ |
something-learned/Mathematics/hackermath/Module_1c_linear_regression_ridge.ipynb | ###Markdown
Linear Regression (Ridge)So far we have been looking at solving for vector $x$ when there is a known matrix $A$ and vector $b$, such that$$ Ax = b $$The first approach solves for one (or no) unique solution when there are $n$ observations and $p$ features with $ n = p + 1 $, i.e. an $n \times n$ matrix.The second approach is OLS - ordinary least squares linear regression - when $ n > p + 1 $. Overfitting in OLSOrdinary least squares estimation on an overdetermined system can give an over-fitted solution, which fits the in-sample data well but does not generalise well when we extend it outside the sample. Let's take the OLS cars example: our sample was 7 cars for which we had $price$ and $kmpl$ data. However, our entire population is a total of 42 cars. We want to see how well the OLS fit on 7 cars does when we extend it to the entire set of 42 cars.
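As a tiny illustrative sketch (toy numbers made up for this note, not taken from the cars data), the two situations can be compared with NumPy:

```python
import numpy as np

# Square case (n = p + 1): one exact solution.
A = np.array([[1.0, 2.0], [1.0, 5.0]])   # toy design matrix with an intercept column
b = np.array([3.0, 9.0])
print(np.linalg.solve(A, b))             # exact solve

# Tall case (n > p + 1): generally no exact solution, so use least squares.
A_tall = np.array([[1.0, 2.0], [1.0, 5.0], [1.0, 7.0]])
b_tall = np.array([3.0, 9.0, 11.0])
beta, residuals, rank, sv = np.linalg.lstsq(A_tall, b_tall, rcond=None)
print(beta)                              # OLS coefficients
```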
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
pop = pd.read_csv('data/cars_small.csv')
pop.shape
pop.head()
sample_rows = [35,17,11,25,12,22,13]
sample = pop.loc[sample_rows,:]
sample
###Output
_____no_output_____
###Markdown
Let's plot the entire population (n = 42), the sample (n = 7), and our original prediction line.$$ price = 1662 - 62 * kmpl ~~~~ \textit{(sample = 7)}$$
###Code
# Plot the population and the sample
plt.scatter(pop.kmpl, pop.price, s = 150, alpha = 0.5 )
plt.scatter(sample.kmpl, sample.price, s = 150, alpha = 0.5, c = 'r')
# Plot the OLS Line - Sample
beta_0_s, beta_1_s = 1662, -62
x = np.arange(min(pop.kmpl),max(pop.kmpl),1)
plt.xlabel('kmpl')
plt.ylabel('price')
y_s = beta_0_s + beta_1_s * x
plt.text(x[-1], y_s[-1], 'sample')
plt.plot(x, y_s, '-')
###Output
_____no_output_____
###Markdown
Let us find the best-fit OLS line for the population
###Code
def ols (df):
n = df.shape[0]
x0 = np.ones(n)
x1 = df.kmpl
X = np.c_[x0, x1]
X = np.asmatrix(X)
y = np.transpose(np.asmatrix(df.price))
X_T = np.transpose(X)
X_pseudo = np.linalg.inv(X_T * X) * X_T
beta = X_pseudo * y
return beta
ols(sample)
ols(pop)
###Output
_____no_output_____
###Markdown
So the two OLS lines are:$$ price = 1662 - 62 * kmpl ~~~~ \textit{(sample = 7)}$$ $$ price = 1158 - 36 * kmpl ~~~~ \textit{(population = 42)}$$ Let us plot this data:
###Code
# Plot the population and the sample
plt.scatter(pop.kmpl, pop.price, s = 150, alpha = 0.5 )
plt.scatter(sample.kmpl, sample.price, s = 150, alpha = 0.5, c = 'r')
# Plot the OLS Line - sample and population
beta_0_s, beta_1_s = 1662, -62
beta_0_p, beta_1_p = 1158, -36
x = np.arange(min(pop.kmpl),max(pop.kmpl),1)
plt.xlabel('kmpl')
plt.ylabel('price')
y_s = beta_0_s + beta_1_s * x
plt.text(x[-1], y_s[-1], 'sample')
y_p = beta_0_p + beta_1_p * x
plt.text(x[-1], y_p[-1], 'population')
plt.plot(x, y_s, '-')
plt.plot(x, y_p, '-')
###Output
_____no_output_____
###Markdown
Understanding OverfittingThe reason overfitting happens is that our original line depends heavily on the selection of the sample of 7 observations. If we change our sample, we get a different answer every time!
###Code
# Randomly select 7 cars from this dataset
sample_random = pop.sample(n=7)
sample_random
###Output
_____no_output_____
###Markdown
Let us write some code to randomly draw a sample of 7 and do it $z$ times and see the OLS lines and coefficients
###Code
ols(sample_random)
def random_cars_ols (z):
beta = []
for i in range(z):
# Select a sample and run OLS
sample_random = pop.sample(n=7)
b = ols(sample_random)
beta.append([b[0,0], b[1,0]])
# Get the OLS line
x = np.arange(min(pop.kmpl), max(pop.kmpl), 1)
y = b[0,0] + b[1,0] *x
# Set the plotting area
plt.subplot(1, 2, 1)
plt.tight_layout()
a = round(1/np.log(z), 2)
# Plot the OLS line
plt.plot(x,y, '-', linewidth = 1.0, c = 'b', alpha = a)
plt.xlabel('kmpl')
plt.ylabel('price')
plt.ylim(0,1000)
# Plot the intercept and coefficients
plt.subplot(1,2,2)
plt.scatter(beta[i][1],beta[i][0], s = 50, alpha = a)
plt.xlim(-120,60)
plt.ylim(-500,3000)
plt.xlabel('beta_1')
plt.ylabel('beta_0')
    # Plot the Population line
plt.subplot(1, 2, 1)
beta_0_p, beta_1_p = 1158, -36
x = np.arange(min(pop.kmpl),max(pop.kmpl),1)
y_p = beta_0_p + beta_1_p * x
plt.plot(x, y_p, '-', linewidth =4, c = 'r')
###Output
_____no_output_____
###Markdown
Let us do this 500 times, $ z = 500 $
###Code
random_cars_ols(500)
###Output
_____no_output_____
###Markdown
L2 Regularization - Ridge RegressionNow, to prevent our $\beta$ from going all over the place to fit the line, we need to constrain $\beta$:$$ \beta^{T} \beta < C $$For OLS our error term was: $$ E_{ols}(\beta)= \frac {1}{n} (y-X\beta)^{T}(y-X\beta) $$So now we add a penalty on $\beta$ to our minimization function$$ E_{reg}(\beta)= \frac {1}{n} (y-X\beta)^{T}(y-X\beta) + \frac {\alpha}{n} \beta^{T}\beta$$To get the minimum of this error function, we differentiate with respect to $\beta^T$$$ \nabla E_{reg}(\beta) = 0 $$$$ \nabla E_{reg}(\beta) ={\frac {dE_{reg}(\beta)}{d\beta^T}} = \frac {2}{n} X^T(X\beta - y) + \frac {2\alpha}{n} \beta = 0 $$$$ X^T X\beta + \alpha \beta = X^T y $$So our $\beta$ for the regularized function is$$ \beta_{reg} = (X^T X + \alpha I)^{-1}X^Ty$$When $ \alpha = 0 $, this reduces to OLS$$ \beta_{ols} = (X^T X)^{-1}X^Ty$$ Direct Calculation
###Code
def ridge (df, alpha):
n = df.shape[0]
x0 = np.ones(n)
x1 = df.kmpl
X = np.c_[x0, x1]
X = np.asmatrix(X)
y = np.asmatrix(df.price.values.reshape(-1,1))
X_T = np.transpose(X)
I = np.identity(2)
beta = np.linalg.inv(X_T * X + alpha * I ) * X_T * y
return beta
###Output
_____no_output_____
###Markdown
Let us run this with alpha = 0, which is OLS
###Code
ridge(sample, 0)
###Output
_____no_output_____
###Markdown
Let's increase alpha to constrain the fit and see the result
###Code
def ridge_plot(df, alphas, func):
plt.scatter(df.kmpl, df.price, s = 150, alpha = 0.5)
plt.xlabel('kmpl')
plt.ylabel('price')
# Plot the Ridge line
for a in alphas:
beta = func(df, a)
x = np.arange(min(df.kmpl), max(df.kmpl), 1)
y = beta[0,0] + beta[1,0] * x
plt.plot(x,y, '-', linewidth = 1, c = 'b')
plt.text(x[-1], y[-1], '%s' % a, size = "smaller")
ridge_plot(sample, [0, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1], ridge)
###Output
_____no_output_____
###Markdown
ExercisesRun a Ridge Linear Regression:$$ price = \beta_{0} + \beta_{1} kmpl + \beta_{2} bhp + \beta_{3} kmpl^2 + \beta_{4} bhp/kmpl $$Run the Ridge Regression using Pseudo Inverse? Plot the Ridge Regression for different values of $\alpha$
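One possible sketch of the design-matrix setup (my own illustration; it assumes `cars_small.csv` also has a `bhp` column - that column name is an assumption - and otherwise follows the `ridge()` helper above):

```python
# Sketch only - assumes the dataframe has 'kmpl', 'bhp' and 'price' columns.
def ridge_features(df, alpha):
    X = np.c_[np.ones(len(df)),      # intercept
              df.kmpl,
              df.bhp,
              df.kmpl ** 2,
              df.bhp / df.kmpl]
    X = np.asmatrix(X)
    y = np.asmatrix(df.price.values.reshape(-1, 1))
    I = np.identity(X.shape[1])
    # same closed form as above: beta = (X^T X + alpha*I)^{-1} X^T y
    return np.linalg.inv(X.T * X + alpha * I) * X.T * y

# ridge_features(sample, 0.01)
```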
###Code
###Output
_____no_output_____
###Markdown
Plot the overfitting by taking $n = 20$ samples? Plot the overfitting by taking $n = 42$ (entire population)? Using sklearn
###Code
from sklearn import linear_model
def ridge_sklearn(df, alpha):
y = df.price
X = df.kmpl.values.reshape(-1,1)
X = np.c_[np.ones((X.shape[0],1)),X]
model = linear_model.Ridge(alpha = alpha, fit_intercept = False)
model.fit(X,y)
beta = np.array([model.coef_]).T
return beta
ridge_sklearn(pop, 0)
ridge_plot(sample, [0, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1], ridge_sklearn)
###Output
_____no_output_____
###Markdown
Let us now run this and see how ridge regression helps in reducing overfitting
###Code
def random_cars_ridge (z, alpha, func):
beta = []
for i in range(z):
# Select a sample and run OLS
sample_random = pop.sample(n=7)
b = func(sample_random, alpha)
beta.append([b[0,0], b[1,0]])
# Get the OLS line
x = np.arange(min(pop.kmpl), max(pop.kmpl), 1)
y = b[0,0] + b[1,0] *x
# Set the plotting area
plt.subplot(1, 2, 1)
plt.tight_layout()
a = round(1/np.log(z), 2)
# Plot the OLS line
plt.plot(x,y, '-', linewidth = 1, c = 'b', alpha = a)
plt.xlabel('kmpl')
plt.ylabel('price')
plt.ylim(0,1000)
# Plot the intercept and coefficients
plt.subplot(1,2,2)
plt.scatter(beta[i][1],beta[i][0], s = 50, alpha = a)
plt.xlim(-120,60)
plt.ylim(-500,3000)
plt.xlabel('beta_1')
plt.ylabel('beta_0')
# Plot the Population line
plt.subplot(1, 2, 1)
beta_0_p, beta_1_p = 1158, -36
x = np.arange(min(pop.kmpl),max(pop.kmpl),1)
y_p = beta_0_p + beta_1_p * x
plt.plot(x, y_p, '-', linewidth =4, c = 'r')
random_cars_ridge (500, 0.02, ridge)
###Output
_____no_output_____ |
lab4/lab4_colour_palette.ipynb | ###Markdown
Lab 4.1 - Colour palette detection**Solution by: Marcin Przewięźlikowski**https://github.com/mprzewie/ml_basics_course
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
import os
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples
###Output
_____no_output_____
###Markdown
Let's choose a nice, colourful picture. It should have a few dominant colours plus some anomalies (e.g. a small balloon against a park, or a lone X-Wing against a galaxy). The requirement of a few dominant colours is only one of the reasons it is worth reaching for a comic-book classic :)
###Code
image = cv2.imread("ds_1.jpg")
image = cv2.resize(image, (500, 800))
image = image[:,:,[2,1,0]]
image = image / 255
plt.figure(figsize=(6, 9))
plt.imshow(image)
plt.title("Cover of one of the best comic books of all time")
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Let's treat each of its pixels as an observation in 3-D space (one dimension per colour channel). We also have to decide whether to remove duplicates (pixels with identical RGB values) from the set - this choice will influence the final result.
###Code
all_pixels = image.reshape(-1, 3)
unique_pixels = np.unique(all_pixels, axis=0)
###Output
_____no_output_____
###Markdown
I will work on all the pixels. Let's run k-means clustering on this set, with the following assumptions:* as cluster centres we use existing elements of the set rather than their means (so in practice this is k-medoids) - we do not want to find colours that never appeared in the picture;* we choose the value of the constant k using any method of our own choosing.
###Code
kmeanses = []
fig = plt.figure(figsize=(15, 25))
for n_clusters in range(1, 11):
kmeans = KMeans(n_clusters).fit(all_pixels)
# zamieniam znalezione centra klastrรณw na punkty najbardziej im podobne z datasetu,
# ลผeby otrzymaฤ K-Medoids
new_cluster_centers = []
for c in kmeans.cluster_centers_:
differences = unique_pixels - c
differences_summed = (differences ** 2).sum(axis=1)
min_difference = differences[np.argmin(differences_summed)]
new_cluster_centers.append(c + min_difference)
new_cluster_centers = np.array(new_cluster_centers)
kmeans.cluster_centers_ = new_cluster_centers
kmeanses.append(kmeans)
cluster_indices = kmeans.predict(all_pixels)
all_pixels_clustered = kmeans.cluster_centers_[cluster_indices].reshape(image.shape)
plt.subplot(5, 2, n_clusters)
plt.title(f"n_clusters = {n_clusters}")
plt.imshow(all_pixels_clustered)
plt.axis("off")
plt.tight_layout()
plt.show()
plt.title("Sum of squared distances of points to their clusters vs. the number of clusters")
plt.plot(np.arange(1, len(kmeanses) + 1), [k.inertia_ for k in kmeanses])  # cluster counts start at 1
plt.xlabel("Number of clusters")
plt.ylabel("Sum of squared distances")  # this label belongs on the y-axis
plt.show()
###Output
_____no_output_____
###Markdown
From the plot we can see (via the elbow method) that from $k=4$ onwards the changes in the average distance of points to their clusters are no longer as large as for smaller numbers of clusters. Additionally, when visualizing the clusterings, $k \geq 5$ seems to give nice visual results. So from here on I will use the k-means model trained for $k=5$.
###Code
kmeans = kmeanses[4]
###Output
_____no_output_____
###Markdown
We present the obtained palette.
###Code
plt.title("Colour palette found by k-means")
plt.imshow(np.array([kmeans.cluster_centers_]))
plt.show()
sampled_pixels = unique_pixels[np.random.randint(0, len(unique_pixels), 10000)]
sampled_pixels_clusters = kmeans.predict(sampled_pixels)
clusters = kmeans.cluster_centers_
sampled_pixels_clustered = clusters[sampled_pixels_clusters]
fig = plt.figure(figsize=(10, 10))
fig.suptitle("Sample pixels from the image (left) and the colours k-means mapped them to (right)")
for i, c in enumerate(clusters):
plt.subplot(1, len(clusters), i +1)
pixels_of_cluster = sampled_pixels[sampled_pixels_clusters == i][:10]
pixels_clustered = sampled_pixels_clustered[sampled_pixels_clusters == i][:10]
original_and_clustered = np.hstack([
pixels_of_cluster, pixels_clustered
]).reshape(-1, 2, 3)
plt.axis("off")
plt.imshow(original_and_clustered)
plt.show()
###Output
_____no_output_____
###Markdown
We visualize the clustering itself (e.g. we project the points onto 2D using PCA, paint the centre of each point in its original colour, and its outline in the colour of the cluster it was assigned to).
###Code
pca = PCA().fit(all_pixels)
sampled_pixels_pcad = pca.transform(sampled_pixels)
clusters_pcad = pca.transform(clusters)
fig = plt.figure(figsize=(10,10))
fig.suptitle("Visualization of the clustering without cluster centres")
for i, (c, c_p) in enumerate(zip(clusters, clusters_pcad)):
n_points = 20
pixels_of_cluster = sampled_pixels[sampled_pixels_clusters == i][:n_points]
pixels_pcad = sampled_pixels_pcad[sampled_pixels_clusters == i][:n_points]
plt.scatter(pixels_pcad[:,0], pixels_pcad[:,1], c=[c for _ in pixels_pcad], s=400)
plt.scatter(pixels_pcad[:,0], pixels_pcad[:,1], c=pixels_of_cluster, s=150)
plt.show()
###Output
_____no_output_____
###Markdown
Next, on the same 2D visualization, we show the centres of the found clusters and the Silhouette value for each point (it is always in the range -1 to 1, so it can be visualized with a grey scale). What colour did the points with the highest Silhouette originally have, and what about those with the lowest? Is this measure suitable for detecting anomalous points? Because $silhouette$ takes a very long time to compute on the full set of points, I compute it only on the sample created earlier.
###Code
sampled_pixels_scores = silhouette_samples(sampled_pixels, sampled_pixels_clusters)
fig = plt.figure(figsize=(15,15))
fig.suptitle("Visualization of the clustering with cluster centres and $silhouette$ values")
for i, (c, c_p) in enumerate(zip(clusters, clusters_pcad)):
n_points = 20
pixels_of_cluster = sampled_pixels[sampled_pixels_clusters == i][:n_points]
pixels_pcad = sampled_pixels_pcad[sampled_pixels_clusters == i][:n_points]
pixels_scores = sampled_pixels_scores[sampled_pixels_clusters == i][:n_points]
plt.scatter(pixels_pcad[:,0], pixels_pcad[:,1], c=[c for _ in pixels_pcad], s=1100)
plt.scatter(pixels_pcad[:,0], pixels_pcad[:,1], c="white", s=800)
for (p_c, p_p, p_s) in zip(pixels_of_cluster, pixels_pcad, pixels_scores):
plt.scatter([p_p[0]], [p_p[1]], c=[p_c], s=600, marker=f"${'%.2f' % p_s}$")
plt.scatter([c_p[0]], [c_p[1]], c="white", marker="D", s=800 )
plt.scatter([c_p[0]], [c_p[1]], c=[c], marker="D", s=500 )
plt.show()
###Output
_____no_output_____ |
_notebooks/2022-05-23-NLPKaggleComp.ipynb | ###Markdown
Quickly trying out an NLP model for a Kaggle Competition- toc: true- branch: master- badges: true- hide_binder_badge: true- hide_deepnote_badge: true- comments: true- author: Kurian Benoy- categories: [kaggle, fastaicourse, NLP, huggingface] - hide: false- search_exclude: false This is my attempt to see how well we can build an NLP model for [Natural Language Processing with Disaster Tweets](https://www.kaggle.com/competitions/nlp-getting-started/overview).According to the competition, you are required to:> In this competition, you're challenged to build a machine learning model that predicts which Tweets are about real disasters and which one's aren't. You'll have access to a dataset of 10,000 tweets that were hand classified. If this is your first time working on an NLP problem, we've created a quick tutorial to get you up and running. Downloading Data
###Code
creds = ''
from pathlib import Path
cred_path = Path("~/.kaggle/kaggle.json").expanduser()
if not cred_path.exists():
cred_path.parent.mkdir(exist_ok=True)
cred_path.write_text(creds)
cred_path.chmod(0o600)
! kaggle competitions download -c nlp-getting-started
#hide-output
! unzip nlp-getting-started.zip
import pandas as pd
df = pd.read_csv("train.csv")
df.head()
df.describe(include="object")
#hide_output
df["input"] = df["text"]
###Output
_____no_output_____
###Markdown
Tokenization
###Code
from datasets import Dataset, DatasetDict
ds = Dataset.from_pandas(df)
ds
model_nm = "microsoft/deberta-v3-small"
#hide-output
from transformers import AutoModelForSequenceClassification, AutoTokenizer
tokz = AutoTokenizer.from_pretrained(model_nm)
def tok_func(x):
return tokz(x["input"])
tok_ds = ds.map(tok_func, batched=True)
#collapse_output
row = tok_ds[0]
row["input"], row["input_ids"]
tok_ds = tok_ds.rename_columns({"target": "labels"})
tok_ds
#collapse_output
tok_ds[0]
###Output
_____no_output_____
###Markdown
Validation, Training, Testing
###Code
eval_df = pd.read_csv("test.csv")
eval_df.head()
eval_df.describe(include="object")
model_dataset = tok_ds.train_test_split(0.25, seed=34)
model_dataset
eval_df["input"] = eval_df["text"]
eval_ds = Dataset.from_pandas(eval_df).map(tok_func, batched=True)
###Output
_____no_output_____
###Markdown
Training Models
###Code
from transformers import TrainingArguments, Trainer, DataCollatorWithPadding
bs = 128
epochs = 4
data_collator = DataCollatorWithPadding(tokenizer=tokz)
training_args = TrainingArguments("test-trainer")
model = AutoModelForSequenceClassification.from_pretrained(model_nm, num_labels=2)
trainer = Trainer(
model,
training_args,
train_dataset=model_dataset['train'],
eval_dataset=model_dataset['test'],
data_collator=data_collator,
tokenizer=tokz,
)
trainer.train()
preds = trainer.predict(eval_ds).predictions.astype(float)
preds
###Output
The following columns in the test set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: location, text, id, input, keyword. If location, text, id, input, keyword are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 3263
Batch size = 8
|
Real_and_Fake_Face_Detection.ipynb | ###Markdown
IntroductionGreetings from the Kaggle bot! This is an automatically-generated kernel with starter code demonstrating how to read in the data and begin exploring. If you're inspired to dig deeper, click the blue "Fork Notebook" button at the top of this kernel to begin editing. Exploratory AnalysisTo begin this exploratory analysis, first import libraries and define functions for plotting the data using `matplotlib`. Depending on the data, not all plots will be made. (Hey, I'm just a simple kerneling bot, not a Kaggle Competitions Grandmaster!)
###Code
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt # plotting
import numpy as np # linear algebra
import os # accessing directory structure
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
###Output
_____no_output_____
###Markdown
There are 0 CSV files in the current version of the dataset:
###Code
print(os.listdir('../input'))
###Output
_____no_output_____
###Markdown
The next hidden code cells define functions for plotting data. Click on the "Code" button in the published kernel to reveal the hidden code.
###Code
# Distribution graphs (histogram/bar graph) of column data
def plotPerColumnDistribution(df, nGraphShown, nGraphPerRow):
nunique = df.nunique()
df = df[[col for col in df if nunique[col] > 1 and nunique[col] < 50]] # For displaying purposes, pick columns that have between 1 and 50 unique values
nRow, nCol = df.shape
columnNames = list(df)
    nGraphRow = (nCol + nGraphPerRow - 1) // nGraphPerRow  # integer division so plt.subplot gets an int row count
plt.figure(num = None, figsize = (6 * nGraphPerRow, 8 * nGraphRow), dpi = 80, facecolor = 'w', edgecolor = 'k')
for i in range(min(nCol, nGraphShown)):
plt.subplot(nGraphRow, nGraphPerRow, i + 1)
columnDf = df.iloc[:, i]
if (not np.issubdtype(type(columnDf.iloc[0]), np.number)):
valueCounts = columnDf.value_counts()
valueCounts.plot.bar()
else:
columnDf.hist()
plt.ylabel('counts')
plt.xticks(rotation = 90)
plt.title(f'{columnNames[i]} (column {i})')
plt.tight_layout(pad = 1.0, w_pad = 1.0, h_pad = 1.0)
plt.show()
# Correlation matrix
def plotCorrelationMatrix(df, graphWidth):
filename = df.dataframeName
    df = df.dropna(axis='columns') # drop columns with NaN
df = df[[col for col in df if df[col].nunique() > 1]] # keep columns where there are more than 1 unique values
if df.shape[1] < 2:
print(f'No correlation plots shown: The number of non-NaN or constant columns ({df.shape[1]}) is less than 2')
return
corr = df.corr()
plt.figure(num=None, figsize=(graphWidth, graphWidth), dpi=80, facecolor='w', edgecolor='k')
corrMat = plt.matshow(corr, fignum = 1)
plt.xticks(range(len(corr.columns)), corr.columns, rotation=90)
plt.yticks(range(len(corr.columns)), corr.columns)
plt.gca().xaxis.tick_bottom()
plt.colorbar(corrMat)
plt.title(f'Correlation Matrix for {filename}', fontsize=15)
plt.show()
# Scatter and density plots
def plotScatterMatrix(df, plotSize, textSize):
df = df.select_dtypes(include =[np.number]) # keep only numerical columns
# Remove rows and columns that would lead to df being singular
    df = df.dropna(axis='columns')
df = df[[col for col in df if df[col].nunique() > 1]] # keep columns where there are more than 1 unique values
columnNames = list(df)
if len(columnNames) > 10: # reduce the number of columns for matrix inversion of kernel density plots
columnNames = columnNames[:10]
df = df[columnNames]
ax = pd.plotting.scatter_matrix(df, alpha=0.75, figsize=[plotSize, plotSize], diagonal='kde')
corrs = df.corr().values
    for i, j in zip(*np.triu_indices_from(ax, k = 1)):
ax[i, j].annotate('Corr. coef = %.3f' % corrs[i, j], (0.8, 0.2), xycoords='axes fraction', ha='center', va='center', size=textSize)
plt.suptitle('Scatter and Density Plot')
plt.show()
###Output
_____no_output_____ |
task_20.ipynb | ###Markdown
Create an array of 10 random integers in the range from 1 to 20, then create an array consisting of 5 consecutive elements starting from the 2nd one. Print the values of both arrays using the print() function.
###Code
import numpy as np

arr_1 = np.random.randint(1, 21, 10)
arr_2 = arr_1[1:6]
print(arr_1)
print(arr_2)
###Output
[16 13 12 7 5 3 16 9 4 18]
[13 12 7 5 3]
###Markdown
Create an array of 10 random integers in the range from 0 to 20, then create an array consisting of the elements from the third one through the last. Print the values of both arrays using the print() function.
###Code
arr_1 = np.random.randint(1, 21, 10)
arr_2 = arr_1[2:]
print(arr_1)
print(arr_2)
###Output
[ 2 8 5 7 18 6 3 12 16 5]
[ 5 7 18 6 3 12 16 5]
###Markdown
Create a two-dimensional array of random integers in the range from 3 to 11, with 4 rows and 3 columns. Then create an array consisting of the elements of the second and third rows and the first and second columns. Print the values of both arrays using the print() function.
###Code
arr_1 = np.random.randint(3, 12, 12).reshape(4, 3)
arr_2 = arr_1[1:3, :2]
print(arr_1)
print(arr_2)
###Output
[[11 10 8]
[ 5 9 3]
[ 8 11 8]
[ 3 3 4]]
[[ 5 9]
[ 8 11]]
###Markdown
Create a two-dimensional array of random integers in the range from 0 to 9, with 4 rows and 6 columns. Then create an array consisting of the elements that are greater than 3. Print the values of both arrays using the print() function.
###Code
arr_1 = np.random.randint(0, 10, 24).reshape(4, 6)
arr_2 = arr_1[arr_1>3]
print(arr_1)
print(arr_2)
###Output
[[5 4 8 4 9 6]
[8 1 6 6 7 9]
[5 9 2 5 9 3]
[9 3 0 1 0 1]]
[5 4 8 4 9 6 8 6 6 7 9 5 9 5 9 9]
|
tsa/jose/TSA_COURSE_NOTEBOOKS/05-Time-Series-Analysis-with-Statsmodels/.ipynb_checkpoints/02-EWMA-Exponentially-Weighted-Moving-Average-checkpoint.ipynb | ###Markdown
______Copyright Pierian DataFor more information, visit us at www.pieriandata.com MA Moving AveragesIn this section we'll compare Simple Moving Averages to Exponentially Weighted Moving Averages in terms of complexity and performance.Related Functions:pandas.DataFrame.rolling(window) Provides rolling window calculationspandas.DataFrame.ewm(span) Provides exponential weighted functions Perform standard imports and load the datasetFor these examples we'll use the International Airline Passengers dataset, which gives monthly totals in thousands from January 1949 to December 1960.
###Code
import pandas as pd
import numpy as np
%matplotlib inline
airline = pd.read_csv('../Data/airline_passengers.csv',index_col='Month',parse_dates=True)
airline.dropna(inplace=True)
airline.head()
###Output
_____no_output_____
###Markdown
___ SMA Simple Moving AverageWe've already shown how to create a simple moving average by applying a mean function to a rolling window.For a quick review:
###Code
airline['6-month-SMA'] = airline['Thousands of Passengers'].rolling(window=6).mean()
airline['12-month-SMA'] = airline['Thousands of Passengers'].rolling(window=12).mean()
airline.head(15)
airline.plot();
###Output
_____no_output_____
###Markdown
___ EWMA Exponentially Weighted Moving Average We just showed how to calculate the SMA based on some window. However, basic SMA has some weaknesses:* Smaller windows will lead to more noise, rather than signal* It will always lag by the size of the window* It will never reach the full peak or valley of the data due to the averaging.* Does not really inform you about possible future behavior, all it really does is describe trends in your data.* Extreme historical values can skew your SMA significantlyTo help fix some of these issues, we can use an EWMA (Exponentially weighted moving average). EWMA will allow us to reduce the lag effect from SMA and it will put more weight on values that occurred more recently (by applying more weight to the more recent values, thus the name). The amount of weight applied to the most recent values will depend on the actual parameters used in the EWMA and the number of periods given a window size.[Full details on the mathematics behind this can be found here](http://pandas.pydata.org/pandas-docs/stable/user_guide/computation.html#exponentially-weighted-windows).Here is the shorter version of the explanation behind EWMA.The formula for EWMA is: $y_t = \frac{\sum\limits_{i=0}^t w_i x_{t-i}}{\sum\limits_{i=0}^t w_i}$ Where $x_t$ is the input value, $w_i$ is the applied weight (note how it can change from $i=0$ to $t$), and $y_t$ is the output.Now the question is, how do we define the weight term $w_i$?This depends on the adjust parameter you provide to the .ewm() method.When adjust=True (default) is used, weighted averages are calculated using weights equal to $w_i = (1 - \alpha)^i$ which gives $y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ...+ (1 - \alpha)^t x_{0}}{1 + (1 - \alpha) + (1 - \alpha)^2 + ...+ (1 - \alpha)^t}$ When adjust=False is specified, moving averages are calculated as: $\begin{split}y_0 &= x_0 \\y_t &= (1 - \alpha) y_{t-1} + \alpha x_t,\end{split}$which is equivalent to using weights: \begin{split}w_i = \begin{cases} \alpha (1 - \alpha)^i & \text{if } i < t \\ (1 - \alpha)^i & \text{if } i = t.\end{cases}\end{split} When adjust=False we have $y_0=x_0$ and from the last representation above we have $y_t=\alpha x_t+(1-\alpha)y_{t-1}$, therefore there is an assumption that $x_0$ is not an ordinary value but rather an exponentially weighted moment of the infinite series up to that point.For the smoothing factor $\alpha$ one must have $0 < \alpha \leq 1$, and while it is possible to pass $\alpha$ directly, it's often easier to think about either the span, center of mass (com) or half-life of an EW moment: \begin{split}\alpha = \begin{cases} \frac{2}{s + 1}, & \text{for span}\ s \geq 1\\ \frac{1}{1 + c}, & \text{for center of mass}\ c \geq 0\\ 1 - \exp^{\frac{\log 0.5}{h}}, & \text{for half-life}\ h > 0 \end{cases}\end{split} * Span corresponds to what is commonly called an "N-day EW moving average".* Center of mass has a more physical interpretation and can be thought of in terms of span: $c=(s-1)/2$* Half-life is the period of time for the exponential weight to reduce to one half.* Alpha specifies the smoothing factor directly.We have to pass precisely one of the above into the .ewm() function. For our data we'll use span=12.
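As a quick check of the adjust=False recursion above, we can reproduce pandas' EWMA by hand; a minimal sketch on the same airline series with span=12, so $\alpha = 2/(12+1)$:

```python
alpha = 2 / (12 + 1)
x = airline['Thousands of Passengers'].to_numpy()

ewma_manual = np.empty_like(x, dtype=float)
ewma_manual[0] = x[0]                      # y_0 = x_0
for i in range(1, len(x)):
    ewma_manual[i] = (1 - alpha) * ewma_manual[i - 1] + alpha * x[i]

pandas_ewma = airline['Thousands of Passengers'].ewm(span=12, adjust=False).mean()
print(np.allclose(ewma_manual, pandas_ewma.to_numpy()))   # expected to print True
```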
###Code
airline['EWMA12'] = airline['Thousands of Passengers'].ewm(span=12,adjust=False).mean()
airline[['Thousands of Passengers','EWMA12']].plot();
###Output
_____no_output_____
###Markdown
Comparing SMA to EWMA
###Code
airline[['Thousands of Passengers','EWMA12','12-month-SMA']].plot(figsize=(12,8)).autoscale(axis='x',tight=True);
###Output
_____no_output_____ |
code_lou_demos/pandas_intro.ipynb | ###Markdown
Code Louisville intro to Pandas Pandas is a library that allows us to deal with data in a dataframe format. This is really useful for being able to quickly do data analysis. When importing, it is conventional to import as 'pd' which allows it to later be referenced by typing 'pd' rather than 'pandas'
###Code
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
I'm using the os to find the current working directory.
###Code
os.getcwd()
###Output
_____no_output_____
###Markdown
I downloaded the data from here: http://greaterlouisvilleproject.org/deep-drivers-of-change/education/ and saved it in my current working directory. Now I'm using Pandas' read_excel command to read it in and save it as a dataframe named edu_df. Then I used the `.head()` method to show the first 6 rows of the dataframe. Pandas also has a `pd.read_csv()` command. You can specify either an absolute file path, or more commonly a relative file path, e.g. `pd.read_csv('data/my_data.csv')`
###Code
edu_df = pd.read_excel('GLP-Codebook.xlsx', 'Edu County', index_col=None, na_values=['NA'])
edu_df.head(n = 6)
###Output
_____no_output_____
###Markdown
Using `.tail()` shows the last n rows of the dataframe.
###Code
edu_df.tail(n = 6)
###Output
_____no_output_____
###Markdown
More generally, `.shape` will give the dimensions of the dataframe
###Code
edu_df.shape
###Output
_____no_output_____
###Markdown
And passing the pandas dataframe to the list function will produce a list of all the column names.
###Code
list(edu_df)
###Output
_____no_output_____
###Markdown
You can use the `.iloc` method to pull data based on its index location.
###Code
edu_df.iloc[3, 8]
###Output
_____no_output_____
###Markdown
While calling `.iloc[3, 8]` will give you Louisville's child poverty rate in 2005, I don't recommend doing it this way. It's better to select by column name and then filter down to the row(s) you want, because it's way too easy to make a mistake with numerical indices. Selecting by column name and filtering data based on values is covered a bit later in this intro, but a quick preview is shown below.
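A minimal sketch of the name-based alternative, assuming `child_per` is the child poverty column the sentence above refers to (the filtering syntax itself is explained further down):

```python
# same lookup written with labels instead of positional indices
edu_df.loc[(edu_df.city == "Louisville") & (edu_df.year == 2005), 'child_per']
```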
###Code
edu_df.iloc[1:4, 1:9]
###Output
_____no_output_____
###Markdown
The `:` operator can also be used to select everything up to a certain index - again exclusive of the index you use. So :4 gives rows 0 to 3.
###Code
edu_df.iloc[:4, :9]
###Output
_____no_output_____
###Markdown
Pandas makes it easy to get summary statistics for the whole dataframe. Note though, that Pandas guesses what kind of data it is dealing with. A mean year of 2010.5 doesn't make much sense.
###Code
edu_df.describe()
###Output
_____no_output_____
###Markdown
The type of a variable can be changed using the `.astype()` method. Here we make year a categorical variable, and it drops out of the `edu_df.describe()` output because it no longer matches the rest of the dataframe.
###Code
edu_df['year'] = edu_df['year'].astype('category')
edu_df.describe()
###Output
_____no_output_____
###Markdown
But we can select it on its own to describe it.
###Code
edu_df['year'].describe()
###Output
_____no_output_____
###Markdown
The `.describe()` method returns something different depending on the type of data it's passed
###Code
edu_df['child_per'].describe()
###Output
_____no_output_____
###Markdown
Pandas also makes it easy to create new variables by performing mathematical operations on already existing variables.
###Code
edu_df['bach_plus_race_gap'] = edu_df['bach_plus_per_white'] - edu_df['bach_plus_per_black']
edu_df.head(n = 6)
###Output
_____no_output_____
###Markdown
Filtering data can be done by using brackets. So suppose we just want Louisville in the year 2005. We can filter to that, and then select the column for under age 5 child poverty. This is a better idea than using `.iloc()` because it won't silently break if the underlying dataframe changes, and it's harder to make a mistake with column names and variable values (city == "Louisville") than with index values.
###Code
filtered_df = edu_df[(edu_df.city == "Louisville") & (edu_df.year == 2005)]
filtered_df['under_5_per']
###Output
_____no_output_____
###Markdown
We can also combine these operation to avoid creating a new dataframe.
###Code
edu_df[(edu_df.city == "Louisville") & (edu_df.year == 2005)]['under_5_per']
###Output
_____no_output_____
###Markdown
Joining Data Pandas also allows us to merge datasets together relatively painlessly. To start with, we'll need another dataset. Let's read in another sheet from the same excel document.
###Code
jobs_df = pd.read_excel('GLP-Codebook.xlsx', 'Jobs County', index_col=None, na_values=['NA'])
jobs_df.head(n = 6)
###Output
_____no_output_____
###Markdown
Pandas has a `merge()` function that takes the name of the two dataframe, the type of join (left, right, inner, outer) and the names of the columns to join on.
###Code
df = pd.merge(edu_df, jobs_df, how='outer', left_on=['FIPS','year'], right_on = ['FIPS','year'])
df.head(n = 6)
###Output
_____no_output_____
###Markdown
Notice that pandas even renamed duplicated columns. So city was in both datasets and now there is city_x and city_y.
###Code
list(df)
###Output
_____no_output_____
###Markdown
That's more columns than we need for this example workbook. Pandas .filter method can be used to select a subset of the columns
###Code
df_sel = df.filter(items = ['city_x', 'year', 'current_x', 'child_per', 'per_25_64_bach_plus', 'per_high_wage'])
df_sel.head(n = 6)
###Output
_____no_output_____
###Markdown
In this data, current is an indicator variable that takes 1 if the city is from the current peer city list, and 0 otherwise. There is an older peer city list called baseline. If you work for the city, you should select to keep the `baseline_x` column and then filter to when that variable is equal to 1.
###Code
df_sel = df_sel[(df_sel.current_x == 1)]
df_sel['current_x'].describe()
###Output
_____no_output_____
###Markdown
Some of these names are kind of unwieldy. Let's rename things. We already showed how to select columns, but now that `current_x` only takes the value of 1, we can drop it from the dataframe.
###Code
df_sel = df_sel.rename(columns = {"city_x": "city",
"per_25_64_bach_plus" :"bach",
"child_per":"child_pov",
"per_high_wage":"high_wage_jobs"})
df_sel = df_sel.drop('current_x', axis = 1)
list(df_sel)
###Output
_____no_output_____
###Markdown
Pandas easily allows us to look for correlations across all of the data using the .corr() method.
###Code
df_sel.corr()
###Output
_____no_output_____
###Markdown
Reshaping Data A common operation in data science is to transform data from wide to long and vice versa. The data is currently in a long format. It's 204 rows and 5 columns.
###Code
df_sel.shape
df_T = df_sel.T
df_T
df_wide = df_sel.pivot(index = 'city', columns = 'year')
df_wide
df_wide.shape
###Output
_____no_output_____
###Markdown
Pivoting the dataframe resulted in a dataframe with a hierarchical index. So now calling data at the top level can select more than one column.
###Code
df_wide['child_pov']
###Output
_____no_output_____
###Markdown
And we can call down multiple index levels, making it easy to select all our cities for 2016
###Code
df_wide['child_pov'][2016]
###Output
_____no_output_____
###Markdown
You can also sort using the `.sort_values()` method.
###Code
df_wide['child_pov'].sort_values(by='city')
###Output
_____no_output_____
###Markdown
Except that sorted by city, since `by='city'` refers to the index. That's fine if we want the cities in alphabetical order, but what if we want to order them by child poverty?
###Code
df_wide['child_pov'].sort_values(by=[2016], ascending = False)
###Output
_____no_output_____
###Markdown
Reshaping hierarchical data frames is difficult, so I'm going to cut down to just the child poverty data
###Code
df_wide = df_wide['child_pov']
df_wide.head(n = 6)
###Output
_____no_output_____
###Markdown
One final note before reshaping, you can use .T to transpose a dataframe.
###Code
df_wide.T
###Output
_____no_output_____
###Markdown
Pandas has an index that isn't strictly part of the dataframe. By default it's 0, 1, 2, 3, etc. However, when I cast the data from long to wide, I set the index to the city values. Now we need to undo that before melting/gathering the data.
###Code
df_wide.reset_index(level=0, inplace = True)
df_wide.head(n = 6)
###Output
_____no_output_____
###Markdown
And now we're ready to put our data back into long format using `.melt()`
###Code
df_long = pd.melt(frame = df_wide,
col_level = 0,
id_vars = ['city'],
value_vars = [2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016],
value_name = "child_pov")
df_long
###Output
_____no_output_____ |
sample_volume.ipynb | ###Markdown
Demonstration of DCBC evaluation usage in volume spaceThis notebook shows an example of a Distance Controlled Boundary Coefficient (DCBC) evaluation of a striatum parcellation using the Multi-domain task battery (MDTB) functional dataset (glm7). Installation and RequirementsEnsure Python version >= 3.6 and pip are installed on your system.- Install Python at https://www.python.org/downloads/- Install pip: https://pip.pypa.io/en/stable/installing/ Dependencies`pip install nibabel scipy numpy sklearn matplotlib` UsageBelow is a quick sample script that runs the DCBC evaluation on a striatal parcellation (here the Choi 7-network atlas) in volume space.
###Code
import numpy as np
import scipy as sp
from DCBC_vol import compute_dist, compute_DCBC
import nibabel as nb
import mat73
from plotting import plot_single
# Load mask voxel index
vol_ind = mat73.loadmat('D:/data/sc2/encoding/glm7/striatum_avrgDataStruct.mat')['volIndx']
vol_ind = vol_ind.astype(int)
###Output
data type not supported: struct, uint64
###Markdown
Now, we load the parcellation that we want to evaluate from the file, given the voxel indices of the mask.Note: we first need to convert the volume indices to 3D coordinates using Fortran (column-major) order. This is because MATLAB performs linear indexing in column-major order, while numpy defaults to row-major (C) order.
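A tiny self-contained illustration of the ordering difference (independent of the real data):

```python
a_shape = (2, 3)
# the same linear index maps to different elements under the two conventions
print(np.unravel_index(1, a_shape, order='C'))  # (0, 1): row-major, numpy default
print(np.unravel_index(1, a_shape, order='F'))  # (1, 0): column-major, as MATLAB counts
```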
###Code
# Load parcellation given the mask file or voxel indices
parcel_mni = nb.load('D:/data/sc2/encoding/glm7/spect/masked_par_choi_7.nii.gz').get_fdata()
coord = np.unravel_index(vol_ind - 1, parcel_mni.shape, 'F') # Note: the linear indexing in numpy is column-order
parcels = np.rint(parcel_mni[coord[0], coord[1], coord[2]])
print(parcels)
###Output
[5. 0. 5. ... 0. 6. 0.]
###Markdown
We also need a pairwise distance matrix of all mask voxel indices for DCBC calcluation.
###Code
# Compute the distance matrix between voxel pairs using the mask file, numpy default C-order
coord = np.asarray(coord).transpose()
dist = compute_dist(coord, 2)
print(dist)
###Output
[[ 0. 38.05259518 12.16552506 ... 69.3108938 67.94115101
68.93475176]
[38.05259518 0. 26. ... 51.57518783 51.2249939
50.99019514]
[12.16552506 26. 0. ... 62.09669879 60.95900262
61.6116872 ]
...
[69.3108938 51.57518783 62.09669879 ... 0. 2.
2. ]
[67.94115101 51.2249939 60.95900262 ... 2. 0.
2.82842712]
[68.93475176 50.99019514 61.6116872 ... 2. 2.82842712
0. ]]
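If the helper module is not at hand, a pairwise Euclidean distance matrix can also be built directly with scipy. This is only a cross-check sketch; the assumption that `compute_dist`'s second argument is the 2 mm voxel resolution (hence the scaling below) should be verified against the DCBC_vol source.

```python
from scipy.spatial.distance import cdist

# hypothetical stand-in: Euclidean distances between voxel coordinates,
# scaled by an assumed 2 mm voxel size
dist_alt = cdist(coord, coord) * 2
print(dist_alt.shape)
```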
###Markdown
Here, we load subject functional data for DCBC evaluation and several experiment settings.
###Code
# Load functional profile (betas) and several parameters for evaluation settings
T = mat73.loadmat('D:/data/sc2/encoding/glm7/striatum_avrgDataStruct.mat')['T']
returnsubj = [2,3,4,6,8,9,10,12,14,15,17,18,19,20,21,22,24,25,26,27,28,29,30,31]
session, maxDist, binWidth = 1, 90, 5
###Output
data type not supported: struct, uint64
###Markdown
Now we run the actual DCBC evaluation of the given parcellation for the selected subjects with the settings above: a bin width of 5 mm, a maximum distance of 90 mm between any pair of voxels, and only each subject's session 1 data.
###Code
wcorr_array, bcorr_array, dcbc_array = np.array([]), np.array([]), np.array([])
for sub in returnsubj:
data = T['data'][(T['SN'] == sub) & (T['sess'] == session)].transpose()
R = compute_DCBC(maxDist=maxDist, func=data, dist=dist, binWidth=binWidth, parcellation=parcels)
wcorr_array = np.append(wcorr_array, R['corr_within'])
bcorr_array = np.append(bcorr_array, R['corr_between'])
dcbc_array = np.append(dcbc_array, R['DCBC'])
# print(wcorr_array, bcorr_array, dcbc_array)
###Output
_____no_output_____
###Markdown
After we finish the DCBC evaluation for all subjects, we plot the un-weighted within- and between-parcel correlation curves. A bigger gap between the two curves means the parcellation does a better job of functionally separating brain regions; a small gap means it separates them poorly. In the extreme case of a random parcellation, the two curves coincide.
###Code
%matplotlib inline
plot_single(within=wcorr_array, between=bcorr_array, maxDist=90, binWidth=5,
subjects=returnsubj, within_color='k', between_color='r')
print(dcbc_array)
###Output
_____no_output_____ |
Monte_carlo Simulation on AMD Stocks/amd stocks simulation.ipynb | ###Markdown
OVERVIEW---* Data Visualization of AMD stock price.* Plotting ACF AND PACF.* Growth Factor of AMD stock price.* Seasonal Decomposition of data.* Monte Carlo simulation of AMD stock price.
###Code
#VIZ LIBRARY
import pandas as pd
from pandas import plotting
import pandas_datareader as wb
import numpy as np
from tqdm.notebook import tqdm as tqdm
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib import pyplot
plt.style.use('fivethirtyeight')
sns.set_style('whitegrid')
#CLASSICAL STATS
import scipy
from scipy.stats import norm
import statsmodels
from scipy import signal
import statsmodels.api as sm
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.seasonal import seasonal_decompose
#METRICS
from sklearn.metrics import accuracy_score, confusion_matrix,classification_report, r2_score,mean_absolute_error,mean_squared_error
import warnings
warnings.filterwarnings('ignore')
#Setting monte-carlo asset data
ticker = 'AMD'
t_intervals, iteration = 30, 25 #Simulating the movement of stocks for 30 days with 25 different possibilities
df = pd.DataFrame()
#Get the data from source
df[ticker] = wb.DataReader(ticker, data_source='yahoo', start='2018-1-1')['Adj Close']
###Output
_____no_output_____
###Markdown
DATA BASIC INFORMATION---
###Code
#show dataframe
df.head(10).T
#show feature datatype
df.info()
print('MIN STOCK PRICE: ', df.AMD.min())
print('MAX STOCK PRICE: ', df.AMD.max())
print('MEAN OF STOCK PRICE: ', df.AMD.mean())
print('MEDIAN OF STOCK PRICE: ', df.AMD.median())
###Output
MIN STOCK PRICE: 9.529999732971191
MAX STOCK PRICE: 58.900001525878906
MEAN OF STOCK PRICE: 29.04399999285501
MEDIAN OF STOCK PRICE: 27.8100004196167
###Markdown
EDA---
###Code
#show fig
plt.figure(figsize=(13,4))
plt.title('STOCK PRICE VS TIME')
plt.plot(df.index, df.AMD, lw=2, marker='o', markersize=2, color='steelblue')
plt.xlabel('Date')
plt.ylabel('Price')
###Output
_____no_output_____
###Markdown
INSIGHT---* The graph is highly non-linear, and just by looking at it we can't identify a clear pattern or trend in the stock price. If we decompose it, we should see more interesting details.
###Code
#Applying Seasonal decomposse
dec = seasonal_decompose(df.AMD, freq=1, model='multiplicative')
fig,ax = plt.subplots(3,1, figsize=(10,5))
sns.lineplot(dec.trend.index, dec.trend.values, ax=ax[0])
sns.lineplot(dec.seasonal.index, dec.seasonal.values, ax=ax[1])
sns.lineplot(dec.resid.index, dec.resid.values, ax=ax[2])
for i, res in enumerate(['TREND', 'SEASONAL', 'RESIDUAL']):
ax[i].set_title(res)
plt.tight_layout(1)
###Output
_____no_output_____
###Markdown
DISTRIBUTION OF PRICE---
###Code
df['range'] = pd.cut(df.AMD, [0,10,20,30,40,50,60]).values
#show distribution
fig, ax = plt.subplots(1,2, figsize=(15,5))
sns.barplot(x=df.groupby('range')['AMD'].count().index, y=df.groupby('range')['AMD'].count().values, ax=ax[0])
sns.distplot(df.AMD, bins=40)
plt.suptitle('DISTRIBUTION OF PRICE', fontsize=20)
ax[0].set_xlabel('Range')
ax[0].set_ylabel('Frequency')
ax[1].set_xlabel('Range')
###Output
_____no_output_____
###Markdown
INSIGHTS---* As we can see from the plot above, prices from 10 to 30 are the most frequent.* The stock price distribution is positively skewed: the right tail is longer, so the values are not spread symmetrically around the mean.* For such a distribution we typically have Mean > Median > Mode. GROWTH FACTOR OF STOCK PRICE---
###Code
plt.figure(figsize=(14,5))
plt.title('GROWTH FACTOR PLOT OF AMD STOCK PRICE', fontsize=18)
plt.plot(df.AMD.index, (df.AMD / df.AMD.shift().fillna(0)), lw=2, color='salmon')
###Output
_____no_output_____
###Markdown
ACF AND PACF---
###Code
fig, ax = plt.subplots(1,2, figsize=(14,4))
plot_acf(df.AMD, lags=7, ax=ax[0])
plot_pacf(df.AMD, lags=7, ax=ax[1])
plt.show()
###Output
_____no_output_____
###Markdown
INSIGHTS---* The autocorrelation function shows a very slow decay, which means the series is highly correlated with its past values.* The partial autocorrelation function shows a high correlation with the first lag and a weaker correlation with the second and third lags. MONTE CARLO SIMULATION---
###Code
#dropping the range feature, because i dont need them anymore
df.drop('range', axis=1, inplace=True)
#log returns of data
log_returns = np.log(1 + df.pct_change())
#show fig log returns
plt.figure(figsize=(10,4))
plt.title('LOG NORMAL RETURNS OF PRICES')
sns.lineplot(log_returns.index, log_returns.AMD,lw=1, color='violet')
plt.legend('')
#Setting up the drift and random component (geometric Brownian motion on the log returns)
mean_ = log_returns.mean()
var = log_returns.var()
stdev = log_returns.std()
drift = mean_ - (0.5 *var)  # drift term of the log-return process
# daily returns: exp(drift + sigma * Z), with Z ~ N(0, 1) drawn via the inverse normal CDF
daily_returns = np.exp(drift.values + stdev.values * norm.ppf(np.random.rand(t_intervals, iteration)))
S0 = df.iloc[-1]  # last observed price is the starting value of every simulated path
#Empty daily returns
price_list = np.zeros_like(daily_returns)
price_list[0] = S0
#appliying montecarlo simulation
for i in range(1 , t_intervals):
price_list[i] = price_list[i-1] * daily_returns[i]
#Show the result of 30 days simulation
plt.figure(figsize = (10,4))
plt.plot(price_list, lw=1)
plt.title('30 DAYS SIMULATION WITH 25 DIFFERENT POSSIBILITIES')
###Output
_____no_output_____ |
notebook/19B_Pystan.ipynb | ###Markdown
PyStan====Install `PyStan` with```pip install pystan```The nice thing about `PyMC` is that everything is in Python. With `PyStan`, however, you need to use a domain specific language based on C++ syntax to specify the model and the data, which is less flexible and more work. However, in exchange you get an extremely powerful HMC package (only does HMC) that can be used in R and Python. Useful links- [Paper describing Stan](http://www.stat.columbia.edu/~gelman/research/unpublished/stan-resubmit-JSS1293.pdf)- [Stan home page](http://mc-stan.org/interfaces/)- [Stan Examples and Reference Manual](https://github.com/stan-dev/example-models/wiki)- [PyStan docs](http://pystan.readthedocs.org/en/latest/)- [PyStan GitHub page](https://github.com/stan-dev/pystan) Coin tossWe'll repeat the example of determining the bias of a coin from observed coin tosses. The likelihood is binomial, and we use a beta prior.
###Code
import pystan
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

coin_code = """
data {
int<lower=0> n; // number of tosses
int<lower=0> y; // number of heads
}
transformed data {}
parameters {
real<lower=0, upper=1> p;
}
transformed parameters {}
model {
p ~ beta(2, 2);
y ~ binomial(n, p);
}
generated quantities {}
"""
coin_dat = {
'n': 100,
'y': 61,
}
###Output
_____no_output_____
###Markdown
Fit model
###Code
sm = pystan.StanModel(model_code=coin_code)
###Output
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_7f1947cd2d39ae427cd7b6bb6e6ffd77 NOW.
###Markdown
MAP
###Code
op = sm.optimizing(data=coin_dat)
op
###Output
_____no_output_____
###Markdown
MCMC
###Code
fit = sm.sampling(data=coin_dat)
###Output
_____no_output_____
###Markdown
Loading from a fileThe string in coin_code can also be in a file - say `coin_code.stan` - then we can use it like so```pythonfit = pystan.stan(file='coin_code.stan', data=coin_dat, iter=1000, chains=1)```
###Code
print(fit)
coin_dict = fit.extract()
coin_dict.keys()
# lp_ is the log posterior
###Output
_____no_output_____
###Markdown
We can convert to a DataFrame if necessary
###Code
df = pd.DataFrame(coin_dict)
df.head(3)
fit.plot('p');
plt.tight_layout()
###Output
_____no_output_____
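Since the beta prior is conjugate to the binomial likelihood, this posterior is also available in closed form as Beta(2 + 61, 2 + 39), which gives a handy sanity check on the draws. A minimal sketch, assuming `coin_dat` and `coin_dict` from the cells above:

```python
from scipy import stats

a_post = 2 + coin_dat['y']
b_post = 2 + (coin_dat['n'] - coin_dat['y'])
print("analytic mean:", a_post / (a_post + b_post))
print("analytic 95% interval:", stats.beta(a_post, b_post).interval(0.95))
print("MCMC mean:", coin_dict['p'].mean())
```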
###Markdown
Estimating mean and standard deviation of normal distribution$$X \sim \mathcal{N}(\mu, \sigma^2)$$
###Code
norm_code = """
data {
int<lower=0> n;
real y[n];
}
transformed data {}
parameters {
real<lower=0, upper=100> mu;
real<lower=0, upper=10> sigma;
}
transformed parameters {}
model {
y ~ normal(mu, sigma);
}
generated quantities {}
"""
norm_dat = {
'n': 100,
'y': np.random.normal(10, 2, 100),
}
fit = pystan.stan(model_code=norm_code, data=norm_dat, iter=1000, chains=1)
fit
trace = fit.extract()
plt.figure(figsize=(10,4))
plt.subplot(1,2,1);
plt.hist(trace['mu'][:], 25, histtype='step');
plt.subplot(1,2,2);
plt.hist(trace['sigma'][:], 25, histtype='step');
###Output
_____no_output_____
###Markdown
Optimization (finding MAP)
###Code
sm = pystan.StanModel(model_code=norm_code)
op = sm.optimizing(data=norm_dat)
op
###Output
INFO:pystan:COMPILING THE C++ CODE FOR MODEL anon_model_3318343d5265d1b4ebc1e443f0228954 NOW.
###Markdown
Reusing fitted objects
###Code
new_dat = {
'n': 100,
'y': np.random.normal(10, 2, 100),
}
fit2 = pystan.stan(fit=fit, data=new_dat, chains=1)
fit2
###Output
_____no_output_____
###Markdown
Saving compiled modelsWe can also compile Stan models and save them to file, so as to reload them for later use without needing to recompile.
###Code
def save(obj, filename):
"""Save compiled models for reuse."""
import pickle
with open(filename, 'wb') as f:
pickle.dump(obj, f, protocol=pickle.HIGHEST_PROTOCOL)
def load(filename):
"""Reload compiled models for reuse."""
import pickle
return pickle.load(open(filename, 'rb'))
model = pystan.StanModel(model_code=norm_code)
save(model, 'norm_model.pic')
new_model = load('norm_model.pic')
fit4 = new_model.sampling(new_dat, chains=1)
fit4
###Output
_____no_output_____
###Markdown
Estimating parameters of a linear regression modelWe will show how to estimate regression parameters using a simple linear model$$y \sim ax + b$$We can restate the linear model $$y = ax + b + \epsilon$$ as sampling from a probability distribution$$y \sim \mathcal{N}(ax + b, \sigma^2)$$We will assume the following priors$$a \sim \mathcal{N}(0, 100) \\b \sim \mathcal{N}(0, 100) \\\sigma \sim \mathcal{U}(0, 20)$$
###Code
lin_reg_code = """
data {
int<lower=0> n;
real x[n];
real y[n];
}
transformed data {}
parameters {
real a;
real b;
real sigma;
}
transformed parameters {
real mu[n];
for (i in 1:n) {
mu[i] <- a*x[i] + b;
}
}
model {
sigma ~ uniform(0, 20);
y ~ normal(mu, sigma);
}
generated quantities {}
"""
n = 11
_a = 6
_b = 2
x = np.linspace(0, 1, n)
y = _a*x + _b + np.random.randn(n)
lin_reg_dat = {
'n': n,
'x': x,
'y': y
}
fit = pystan.stan(model_code=lin_reg_code, data=lin_reg_dat, iter=1000, chains=1)
fit
fit.plot(['a', 'b']);
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Simple Logistic modelWe have observations of height and weight and want to use a logistic model to guess the sex.
###Code
# observed data
df = pd.read_csv('HtWt.csv')
df.head()
log_reg_code = """
data {
int<lower=0> n;
int male[n];
real weight[n];
real height[n];
}
transformed data {}
parameters {
real a;
real b;
real c;
}
transformed parameters {}
model {
a ~ normal(0, 10);
b ~ normal(0, 10);
c ~ normal(0, 10);
for(i in 1:n) {
male[i] ~ bernoulli(inv_logit(a*weight[i] + b*height[i] + c));
}
}
generated quantities {}
"""
log_reg_dat = {
'n': len(df),
'male': df.male,
'height': df.height,
'weight': df.weight
}
fit = pystan.stan(model_code=log_reg_code, data=log_reg_dat, iter=1000, chains=4)
fit
df_trace = pd.DataFrame(fit.extract(['c', 'b', 'a']))
pd.plotting.scatter_matrix(df_trace[:], diagonal='kde');
###Output
_____no_output_____ |
jupyter_notebooks/ditributions/plot_distributions.ipynb | ###Markdown
Normal distribution
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import norm, t, chi2

mu = 3.2
sigma = 0.3
data = norm.rvs(mu, sigma, size=3000, random_state=45)
fig, ax = plt.subplots(figsize=(12,6))
sns.histplot(data=data, kde=True, palette='deep')
ax.annotate(f'$\mu$={mu}\n$\sigma$={sigma}', xy=(3.85, 80), fontsize=15,
ha='center', va='center')
ax.set_title('Average weight of a newborn in kilos', fontsize=15)
ax.xaxis.set_tick_params(labelsize=13)
ax.get_yaxis().set_visible(False)
plt.tight_layout()
plt.savefig('../../assets/images/probability/toy_newborn_weight_distribution.png', bbox_inches='tight');
fig, ax = plt.subplots(1,2, figsize=(20,6))
x = np.arange(1.5, 4.9, 0.001)
y = norm.pdf(x, mu, sigma)
ax[0].plot(x, y, color='royalblue', alpha=0.9)
x1 = np.arange(1.5, 3, 0.001)
y1 = norm.pdf(x1, mu, sigma)
ax[0].fill_between(x1, y1, 0, alpha=0.3, color='b')
ax[1].plot(x, y, color='royalblue', alpha=0.9)
x2 = np.arange(3, 3.5, 0.001)
y2 = norm.pdf(x2, mu, sigma)
ax[1].fill_between(x2, y2, 0, alpha=0.3, color='b')
ax[0].set_title('Weight less than 3 kilos', fontsize=15)
ax[1].set_title('Weight from 3 to 3.5 kilos', fontsize=15)
for ax in ax:
ax.xaxis.set_tick_params(labelsize=13)
ax.get_yaxis().set_visible(False)
plt.savefig('toy_newborn_weight_distribution_area.png', bbox_inches='tight');
fig, ax = plt.subplots(figsize=(10,6))
x = np.arange(1.5, 4.9, 0.001)
y = norm.pdf(x, mu, sigma)
ax.plot(x, y, color='royalblue', alpha=0.9)
ax.axvline(mu, color='dimgray', linestyle='--')
ax.axvline(mu+sigma, color='darkgrey', linestyle='--')
ax.axvline(mu+sigma*2, color='darkgrey', linestyle='--')
ax.axvline(mu+sigma*3, color='darkgrey', linestyle='--')
ax.axvline(mu-sigma, color='darkgrey', linestyle='--')
ax.axvline(mu-sigma*2, color='darkgrey', linestyle='--')
ax.axvline(mu-sigma*3, color='darkgrey', linestyle='--')
props = dict(boxstyle="round", fc='lightsteelblue', ec='ghostwhite')
ax.annotate('', xy=(mu-sigma, 0.9), xytext=(mu+sigma, 0.9), fontsize=15,
ha='center', va='center', arrowprops=dict(arrowstyle='<->', )
)
ax.text(mu, 1, '68.26%', fontsize=14,
ha='center', va='center', bbox=props)
ax.annotate('', xy=(mu-2*sigma, 0.55), xytext=(mu+2*sigma, 0.55), fontsize=15,
ha='center', va='center', arrowprops=dict(arrowstyle='<->', )
)
ax.text(mu, 0.65, '95.44%', fontsize=14,
ha='center', va='center', bbox=props)
ax.annotate('', xy=(mu-3*sigma, 0.2), xytext=(mu+3*sigma, 0.2), fontsize=15,
ha='center', va='center', arrowprops=dict(arrowstyle='<->', )
)
ax.text(mu, 0.3, '99.73%', fontsize=14,
ha='center', va='center', bbox=props)
ax.xaxis.set_tick_params(labelsize=13)
ax.get_yaxis().set_visible(False)
plt.savefig('toy_newborn_6_sigma.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Student's t-distribution
###Code
fig, ax = plt.subplots(figsize=(12,6))
mu = 3.2
sigma = 0.3
x = np.arange(1.5, 4.9, 0.01)
y = norm.pdf(x, loc=mu, scale=sigma)
ax.plot(x, y, color='royalblue', alpha=0.9, label='Normal distribution')
y2 = t.pdf(x, df=2, loc=mu, scale=sigma)
ax.plot(x, y2, color='peru', alpha=0.9, label=r'$t$-distribution, 2 degrees of freedom')
y3 = t.pdf(x, df=10, loc=mu, scale=sigma)
ax.plot(x, y3, color='olive', alpha=0.9, label=r'$t$-distribution, 10 degrees of freedom')
y4 = t.pdf(x, df=30, loc=mu, scale=sigma)
ax.plot(x, y4, color='maroon', alpha=0.9, label=r'$t$-distribution, 30 degrees of freedom')
ax.axvline(mu, color='darkgrey', linestyle='--')
ax.set_title('PDF for the normal and t-distributions', fontsize=15)
ax.xaxis.set_tick_params(labelsize=13)
ax.get_yaxis().set_visible(False)
plt.legend(fontsize=13)
plt.tight_layout()
plt.savefig('../../assets/images/probability/normal_and_t_distributions.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Chi-square distribution
###Code
x = np.arange(0, 10, 0.01)
with sns.axes_style('whitegrid'):
fig, ax = plt.subplots(figsize=(12,6))
ax.set_ylim(0, 1)
ax.set_xlim(0, 10)
for df in range(1, 6):
y = chi2.pdf(x, df=df, loc=0, scale=1)
plt.plot(x, y, label = f'{df} degree of freedom')
plt.legend(fontsize=13)
plt.tight_layout()
plt.savefig('../../assets/images/probability/chi_squared_distributions.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Binomial distribution
###Code
with sns.axes_style('darkgrid'):
fig, ax = plt.subplots(figsize=(12,6))
n = 200
p = 0.8
size = 1000
binomial = np.random.binomial(n, p, size)
sns.histplot(data=binomial, palette='deep', bins=20)
ax.set_title('Number of visitors out of 200 who enjoyed the movie', fontsize=15)
ax.annotate(f'$n$={n}\n$p$={p}\n$N$={size}', xy=(146, 100), fontsize=15,
ha='center', va='center')
plt.tight_layout()
plt.savefig('../../assets/images/probability/binomial_distribution.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Uniform distribution
###Code
with sns.axes_style('darkgrid'):
fig, ax = plt.subplots(figsize=(12,6))
uniform_discrete = np.random.randint(low=1, high=7, size=500)
sns.histplot(data=uniform_discrete, palette='deep', bins=6)
ax.set_title('Number of outcomes in 500 dice rollings', fontsize=15)
plt.tight_layout()
plt.savefig('../../assets/images/probability/uniform_distribution.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Geometric distribution
###Code
geometric = []
failure = 0
n = 0
p = 0.2
while n < 2000:
result = np.random.choice(['success', 'failure'], p=(p, 1-p))
if result == 'failure':
failure += 1
else:
geometric.append(failure)
failure = 0
n += 1
with sns.axes_style('darkgrid'):
fig, ax = plt.subplots(figsize=(12,6))
sns.histplot(data=geometric, palette='deep', bins=14)
ax.annotate(f'$p$={p}\n$N$={n}', xy=(9, 550), fontsize=15,
ha='center', va='center')
ax.set_title('Number of customer engagements before the first sale', fontsize=15)
plt.tight_layout()
plt.savefig('../../assets/images/probability/geometric_distribution.png', bbox_inches='tight');
###Output
_____no_output_____
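The hand-rolled loop above is easy to follow, but numpy can draw the same distribution directly. Note that numpy's geometric variate counts the trials up to and including the first success, so subtracting 1 gives the number of failures plotted above; a quick sketch:

```python
# equivalent draw with numpy: failures before the first success
geometric_np = np.random.default_rng().geometric(p=0.2, size=2000) - 1
print(geometric_np.mean())   # should be close to (1 - p) / p = 4
```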
###Markdown
Negative binomial distribution
###Code
with sns.axes_style('darkgrid'):
fig, ax = plt.subplots(figsize=(12,6))
negative_binomial = np.random.default_rng().negative_binomial(10, 0.2, size=2000)
sns.histplot(data=negative_binomial, palette='deep', bins=14)
ax.annotate('$r$=10\n$p$=0.2\n$N$=2000', xy=(70, 280), fontsize=15,
ha='center', va='center')
ax.set_title('Number of unsuccessful customer engagements before 10 sales were made', fontsize=15)
plt.tight_layout()
plt.savefig('../../assets/images/probability/negative_binomial_distribution.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Poisson distribution
###Code
with sns.axes_style('darkgrid'):
fig, ax = plt.subplots(figsize=(12,6))
poisson = np.random.poisson(lam=5, size=2000)
sns.histplot(data=poisson, palette='deep', bins=14)
ax.annotate('$\lambda$=5\n$N$=2000', xy=(11, 150), fontsize=15,
ha='center', va='center')
ax.set_title('Number of received promo emails per week', fontsize=15)
plt.tight_layout()
plt.savefig('../../assets/images/probability/poisson_distribution.png', bbox_inches='tight');
###Output
_____no_output_____
###Markdown
Exponential distribution
###Code
with sns.axes_style('darkgrid'):
fig, ax1 = plt.subplots(figsize=(12,6))
ax2 = ax1.twinx()
lam = 4/60
beta = 1/lam
exponential = np.random.default_rng().exponential(beta, size=2000)
x = np.arange(0, 110, 0.001)
def expon_func(x, lam):
f_x = lam*np.exp(-lam*x)
return f_x
sns.histplot(data=exponential, palette='deep', bins=14, ax=ax1)
ax2.plot(x, expon_func(x, lam), color='maroon', label='Probability density function')
ax1.annotate('$\lambda$=0.07\n$N$=2000', xy=(100, 280), fontsize=15,
ha='center', va='center')
ax1.set_title('Distribution of minutes spent before a new bus arrives', fontsize=15)
plt.legend(fontsize=13)
plt.tight_layout()
plt.savefig('../../assets/images/probability/exponential_distribution.png', bbox_inches='tight');
###Output
_____no_output_____ |
Jupyter_Notebooks/.ipynb_checkpoints/Keras_101_Neuron_Number-checkpoint.ipynb | ###Markdown
We are going to discuss, and hopefully demonstrate, how many hidden layers a network needs and how many neurons should populate those layers. To begin, let's import the necessary packages we will use for our demonstrations and discussions.
###Code
import keras
from sklearn.datasets import make_moons, make_circles
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
We will now generate some classification data for us to play with. These datasets consist of the features x1 and x2 for which each point is assigned a class of 0 or 1.
###Code
dataset_poly = make_moons(n_samples=300,noise=0.20, random_state=1)
dataset_circle = make_circles(n_samples=300,noise=0.15,factor=0.2, random_state=1)
features_poly = dataset_poly[0]
labels_poly = dataset_poly[1]
features_circle = dataset_circle[0]
labels_circle = dataset_circle[1]
features_poly[:,0] = (features_poly[:,0]+1.5)/3.0
features_poly[:,1] = (features_poly[:,1]+1.5)/3.0
x1_min = np.amin(features_poly[:,0])
x1_max = np.amax(features_poly[:,0])
x2_min = np.amin(features_poly[:,1])
x2_max = np.amax(features_poly[:,1])
features_circle[:,0] = (features_circle[:,0]+1.5)/3.0
features_circle[:,1] = (features_circle[:,1]+1.5)/3.0
###Output
_____no_output_____
###Markdown
By plotting the two datasets we can see that the shapes of the decision boundaries that separate the two classes of data resemble a polynomial for the first and a circle for the second.
###Code
plt.scatter(features_poly[:,0],features_poly[:,1],edgecolor="black",linewidth=2,c=labels_poly)
plt.xlabel("x1")
plt.ylabel("x2")
plt.colorbar()
plt.show()
plt.scatter(features_circle[:,0],features_circle[:,1],edgecolor="black",linewidth=2,c=labels_circle)
plt.xlabel("x1")
plt.ylabel("x2")
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
We will begin by looking at the first, polynomial dataset and at the simplest neural network possible: a single neuron. Strictly speaking this is not a neural network but a linear classifier; however, it makes up the simplest building block of the networks we will look at later. Here the two input features x1 and x2 feed into a single output neuron that classifies each data point fed into the network as a 1 or a 0.
###Code
layers = []
layers.append(keras.layers.Dense(1, input_dim = 2, activation="sigmoid"))
model = keras.Sequential(layers)
model.compile(optimizer=keras.optimizers.Adam(lr=1), loss='binary_crossentropy', metrics=['binary_accuracy', 'categorical_accuracy'])
history = model.fit(features_poly, labels_poly, batch_size=features_poly.shape[0],epochs=600, verbose=0)
loss = history.history['loss']
epoch = np.arange(0, len(loss))
plt.plot(epoch,loss, label='Training Data Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
By plotting the resulting decision boundary after the training has converged, we see that we do indeed have a linear classifier: after any amount of training we will always be left with a linear decision boundary separating the two classes of data.
###Code
xx, yy = np.meshgrid(np.arange(x1_min,x1_max,0.01),np.arange(x2_min,x2_max,0.01))
z = model.predict(np.c_[xx.ravel(),yy.ravel()])
z = z.reshape(xx.shape)
plt.contourf(xx,yy,z)
plt.scatter(features_poly[:,0],features_poly[:,1],c=labels_poly)
plt.xlabel('x1')
plt.ylabel('x2')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
This single neuron makes up the most basic component of a neural network, and it is here that we will begin our discussion of how to design our network. The second simplest network we can imagine consists of the same input and output as the first network but with a single hidden layer of one neuron between them; at this point we still have nothing more than a linear classifier. If you are not convinced (as you probably shouldn't be), feel free to adapt the model above and observe the resulting decision boundary (a sketch of that one-hidden-neuron model is given below). The task now is to describe how adding more neurons to a single hidden layer changes the shape and complexity of the decision boundary we can describe with our network. We will see that by increasing the number of neurons in a single hidden layer that then feeds into a single output neuron, we are able to represent any single function that maps one finite space onto another. As each neuron in the hidden layer acts as a single linear classifier, by adding more we feed more linear components into the output neuron; the result is a sum of linear components that allows any single function to be described. To demonstrate this, let's add a second neuron to our hidden layer.
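For reference, a minimal sketch of that one-hidden-neuron model, using the same data and settings as above so you can confirm the boundary stays linear:

```python
layers = []
layers.append(keras.layers.Dense(1, input_dim=2, activation="sigmoid"))  # single hidden neuron
layers.append(keras.layers.Dense(1, activation="sigmoid"))               # output neuron
model_1h = keras.Sequential(layers)
model_1h.compile(optimizer=keras.optimizers.Adam(lr=0.1), loss='binary_crossentropy',
                 metrics=['binary_accuracy'])
model_1h.fit(features_poly, labels_poly, batch_size=features_poly.shape[0], epochs=2000, verbose=0)
```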
###Code
layers = []
layers.append(keras.layers.Dense(2, input_dim = 2, activation="sigmoid"))
layers.append(keras.layers.Dense(1, activation="sigmoid"))
model = keras.Sequential(layers)
model.compile(optimizer=keras.optimizers.Adam(lr=0.1), loss='binary_crossentropy', metrics=['binary_accuracy', 'categorical_accuracy'])
history = model.fit(features_poly, labels_poly, batch_size=features_poly.shape[0],epochs=2000, verbose=0)
loss = history.history['loss']
epoch = np.arange(0, len(loss))
plt.plot(epoch,loss, label='Training Data Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
xx, yy = np.meshgrid(np.arange(x1_min,x1_max,0.01),np.arange(x2_min,x2_max,0.01))
z = model.predict(np.c_[xx.ravel(),yy.ravel()])
z = z.reshape(xx.shape)
plt.contourf(xx,yy,z)
plt.scatter(features_poly[:,0],features_poly[:,1],c=labels_poly)
plt.xlabel('x1')
plt.ylabel('x2')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
It should be evident that the above decision boundary is essentially the summation of two linear decision boundaries passed through an output neuron that normalises the result. Now let's have a look at the shape of the data we are classifying in order to get an idea of the model this data requires.We can see that the summation of three linear decision boundaries would perfectly describe the decision boundary we require for our classification data, so let's add a third neuron to our network model and test this hypothesis.
###Code
layers = []
layers.append(keras.layers.Dense(3, input_dim = 2, activation="sigmoid"))
layers.append(keras.layers.Dense(1, activation="sigmoid"))
model = keras.Sequential(layers)
model.compile(optimizer=keras.optimizers.Adam(lr=0.1), loss='binary_crossentropy', metrics=['binary_accuracy', 'categorical_accuracy'])
history = model.fit(features_poly, labels_poly, batch_size=features_poly.shape[0],epochs=3000, verbose=0)
loss = history.history['loss']
epoch = np.arange(0, len(loss))
plt.plot(epoch,loss, label='Training Data Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
xx, yy = np.meshgrid(np.arange(x1_min,x1_max,0.01),np.arange(x2_min,x2_max,0.01))
z = model.predict(np.c_[xx.ravel(),yy.ravel()])
z = z.reshape(xx.shape)
plt.contourf(xx,yy,z)
plt.scatter(features_poly[:,0],features_poly[:,1],c=labels_poly)
plt.xlabel('x1')
plt.ylabel('x2')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
You can see that we do indeed get a decision boundary that describes a good generalisation of the separation between the two classes in our data set. You may be wondering about the effect of adding more neurons to the single hidden layer of our network. We can add any number of neurons, and the result will be a more complex decision boundary. A more complex decision boundary allows data points that deviate from the general distribution of the data (outliers) to be incorporated into the decision boundary, because the loss function will indeed be reduced when the network fits these points even though, as discussed, they lie outside the general rule or decision boundary. This is termed overfitting; a quick way to check for it with a validation split is sketched after the next example.
###Code
layers = []
layers.append(keras.layers.Dense(36, input_dim = 2, activation="sigmoid"))
layers.append(keras.layers.Dense(1, activation="sigmoid"))
model = keras.Sequential(layers)
model.compile(optimizer=keras.optimizers.Adam(lr=0.1), loss='binary_crossentropy', metrics=['binary_accuracy', 'categorical_accuracy'])
history = model.fit(features_poly, labels_poly, batch_size=features_poly.shape[0],epochs=3000, verbose=0)
loss = history.history['loss']
epoch = np.arange(0, len(loss))
plt.plot(epoch,loss, label='Training Data Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
xx, yy = np.meshgrid(np.arange(x1_min,x1_max,0.01),np.arange(x2_min,x2_max,0.01))
z = model.predict(np.c_[xx.ravel(),yy.ravel()])
z = z.reshape(xx.shape)
plt.contourf(xx,yy,z)
plt.scatter(features_poly[:,0],features_poly[:,1],c=labels_poly)
plt.xlabel('x1')
plt.ylabel('x2')
plt.colorbar()
plt.show()
###Output
_____no_output_____ |
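One hedged way to see this overfitting in practice is to hold out part of the data during training and watch the validation loss diverge from the training loss. A minimal sketch reusing the 36-neuron architecture above (the 20% split fraction is an arbitrary choice):

```python
layers = []
layers.append(keras.layers.Dense(36, input_dim=2, activation="sigmoid"))
layers.append(keras.layers.Dense(1, activation="sigmoid"))
model_val = keras.Sequential(layers)
model_val.compile(optimizer=keras.optimizers.Adam(lr=0.1), loss='binary_crossentropy',
                  metrics=['binary_accuracy'])
history = model_val.fit(features_poly, labels_poly, validation_split=0.2,
                        batch_size=features_poly.shape[0], epochs=3000, verbose=0)

plt.plot(history.history['loss'], label='Training Data Loss')
plt.plot(history.history['val_loss'], label='Validation Data Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```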
notebooks/INTRO_DC_LayeredEarth.ipynb | ###Markdown
**Course website**: https://github.com/leomiquelutti/UFU-geofisica-1**Note**: This notebook is part of the course "Geofísica 1" of the Geology program of the [Universidade Federal de Uberlândia](http://www.ufu.br/). All content can be freely used and adapted under the terms of the [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).![Creative Commons License](https://i.creativecommons.org/l/by/4.0/88x31.png)Special thanks to [Leonardo Uieda](www.leouieda.com) and the [GeoSci group](http://geosci.xyz/). The document you are using is a [Jupyter notebook](http://jupyter.org/). It is an interactive document that mixes text (like this), code (like below), and the result of running the code (numbers, text, figures, videos, etc.).
###Code
from em_examples import DCLayers
from IPython.display import display
%matplotlib inline
from matplotlib import rcParams
rcParams['font.size'] = 14
###Output
_____no_output_____
###Markdown
InstructionsThe notebook will give you interactive examples that work through the topics covered in the questionnaire. Use these examples to answer the questions.The cells with numbers beside them, like `In [1]:`, are [Python](http://python.org/) code. Some of these cells produce no output and serve as preparation for the interactive examples. Others produce interactive plots. **You must run all the cells, one at a time**, even the ones that do not produce plots.To run a cell, click on it and press `Shift + Enter`. The focus (the green or gray outline around the cell) should move to the cell below. To run that one, press `Shift + Enter` again, and so on. You can run text cells too, but nothing will happen. Purpose Investigating DC Resistivity Data Using the widgets contained in this notebook we will explore the physical principles governing DC resistivity including the behavior of currents, electric field, electric potentials in a two layer earth. The measured data in a DC experiment are potential differences, we will demonstrate how these provide information about subsurface physical properties. Background: Computing Apparent ResistivityIn practice we cannot measure the potentials everywhere, we are limited to those locations where we place electrodes. For each source (current electrode pair) many potential differences are measured between M and N electrode pairs to characterize the overall distribution of potentials. The widget below allows you to visualize the potentials, electric fields, and current densities from a dipole source in a simple model with 2 layers. For different electrode configurations you can measure the potential differences and see the calculated apparent resistivities. In a uniform halfspace the potential differences can be computed by summing up the potentials at each measurement point from the different current sources based on the following equations:\begin{align} V_M = \frac{\rho I}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} \right] \\ V_N = \frac{\rho I}{2 \pi} \left[ \frac{1}{AN} - \frac{1}{NB} \right] \end{align} where $AM$, $MB$, $AN$, and $NB$ are the distances between the corresponding electrodes. The potential difference $\Delta V_{MN}$ in a dipole-dipole survey can therefore be expressed as follows,\begin{equation} \Delta V_{MN} = V_M - V_N = \rho I \underbrace{\frac{1}{2 \pi} \left[ \frac{1}{AM} - \frac{1}{MB} - \frac{1}{AN} + \frac{1}{NB} \right]}_{G}\end{equation}and the resistivity of the halfspace $\rho$ is equal to,$$ \rho = \frac{\Delta V_{MN}}{IG}$$In this equation $G$ is often referred to as the geometric factor. In the case where we are not in a uniform halfspace the above equation is used to compute the apparent resistivity ($\rho_a$) which is the resistivity of the uniform halfspace which best reproduces the measured potential difference.In the top plot the location of the A electrode is marked by the red +, the B electrode is marked by the blue -, and the M/N potential electrodes are marked by the black dots. The $V_M$ and $V_N$ potentials are printed just above and to the right of the black dots. The calculated apparent resistivity is shown in the grey box to the right. The bottom plot can show the resistivity model, the electric fields (e), potentials, or current densities (j) depending on which toggle button is selected. Some patience may be required for the plots to update after parameters have been changed.
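As a quick numerical illustration of the geometric factor formula above (the electrode positions and the measured potential difference below are made up purely for illustration):

```python
import numpy as np

# hypothetical electrode x-positions (in metres) along a line
xA, xB, xM, xN = -10.0, 10.0, -2.0, 2.0
AM, MB = abs(xM - xA), abs(xB - xM)
AN, NB = abs(xN - xA), abs(xB - xN)

G = (1 / (2 * np.pi)) * (1 / AM - 1 / MB - 1 / AN + 1 / NB)
I = 1.0      # injected current in amperes (assumed)
dV = 0.05    # measured potential difference in volts (assumed)
rho_a = dV / (I * G)
print("geometric factor G =", G, " apparent resistivity =", rho_a, "ohm-m")
```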
LayeredEarth app Parameters: - **A**: (+) Current electrode location - **B**: (-) Current electrode location - **M**: (+) Potential electrode location - **N**: (-) Potential electrode location - **$\rho_1$**: Resistivity of the first layer - **$\rho_2$**: Resistivity of the second layer - **h**: Thickness of the first layer - **Plot**: Choice of 2D plot (Model, Potential, Electric field, Currents)
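###Markdown
As a quick numerical illustration of the formulas above, the hedged sketch below evaluates the geometric factor $G$ and the apparent resistivity $\rho_a$ for one assumed electrode layout and an assumed measured potential difference; all of the numbers are made up for illustration and are not part of the app.
###Code
# Hedged sketch: geometric factor and apparent resistivity for an assumed electrode layout
import numpy as np
A, B, M, N = -30.0, 30.0, -10.0, 10.0      # assumed electrode x-positions in meters
AM, MB = abs(M - A), abs(B - M)
AN, NB = abs(N - A), abs(B - N)
G = (1.0 / (2.0 * np.pi)) * (1.0/AM - 1.0/MB - 1.0/AN + 1.0/NB)
I, dV = 1.0, 0.5                           # assumed current (A) and measured potential difference (V)
rho_a = dV / (I * G)                       # apparent resistivity of the equivalent halfspace
print(f"G = {G:.5f}, apparent resistivity = {rho_a:.1f} ohm-m")
###Output
_____no_output_____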
###Code
out = DCLayers.plot_layer_potentials_app()
display(out)
###Output
_____no_output_____ |
Data_Analysis/missing_value_handling.ipynb | ###Markdown
Working With Missing ValuesMissing data occurs when no information is provided for one or more items or for a whole unit. Missing data is a very common problem in real-life datasets. In pandas, missing values are also referred to as **NA (Not Available)** values.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
dataframe = pd.read_csv('dataset/circle_employee.csv',index_col='user_id') # Load data
dataframe.iloc[:,:6].head()
###Output
_____no_output_____
###Markdown
Checking for missing values using isnull() and notnull()To check for missing values in a pandas DataFrame, we use the functions isnull() and notnull(). Both functions help in checking whether a value is NaN or not. They can also be used on a pandas Series to find null values in a series.
###Code
# using isnull() function
dataframe.head(10).isnull()
# using notnull() function
dataframe.head(10).notnull()
age = dataframe['age']
age.head(10).isnull()
###Output
_____no_output_____
###Markdown
Filling missing values using fillna(), replace() and interpolate() :To fill null values in a dataset, we use the fillna(), replace() and interpolate() functions; these functions replace NaN values with a value of our choosing. - **interpolate():** fills NA values in the DataFrame using various interpolation techniques rather than hard-coding a value.
###Code
# fill null values on age column using Mean
mean = age.mean()
print(mean)
age.fillna(mean).head(10)
###Output
28.302325581395348
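###Markdown
The other two methods mentioned above can be applied to the same column; a minimal, hedged sketch (reusing the `age` series from this notebook) is shown below.
###Code
# Hedged sketch: alternative ways to fill the missing ages
age.interpolate(method='linear').head(10)   # estimate each NaN from its neighbouring values
age.replace(np.nan, age.median()).head(10)  # or replace NaN with a fixed value such as the median
###Output
_____no_output_____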
###Markdown
Dropping missing values using dropna() :To drop null values from a DataFrame, we use the dropna() function. This function drops rows or columns with null values in several different ways.
###Code
# using dropna() function
age.dropna().head(10)
###Output
_____no_output_____
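###Markdown
dropna() can drop in several different ways; a hedged sketch of the most common variants (run on the full dataframe without modifying it) is shown below.
###Code
# Hedged sketch: common dropna() variants
dataframe.dropna(axis=0).head()    # drop every row that contains at least one null value
dataframe.dropna(axis=1).head()    # drop every column that contains at least one null value
dataframe.dropna(thresh=5).head()  # keep only rows with at least 5 non-null values
###Output
_____no_output_____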
###Markdown
DATA_ANALYSIS_WITH_PYTHON
###Code
filename = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DA0101EN/auto.csv"
headers = ["symboling","normalized-losses","make","fuel-type","aspiration", "num-of-doors","body-style",
"drive-wheels","engine-location","wheel-base", "length","width","height","curb-weight","engine-type",
"num-of-cylinders", "engine-size","fuel-system","bore","stroke","compression-ratio","horsepower",
"peak-rpm","city-mpg","highway-mpg","price"]
df = pd.read_csv(filename, names = headers)
df.head()
###Output
_____no_output_____
###Markdown
In the car dataset, missing data comes with the question mark "?". We replace "?" with NaN (Not a Number), which is Python's default missing value marker, for reasons of computational speed and convenience. Here we use the function:
###Code
# replace "?" to NaN
df.replace("?", np.nan, inplace = True)
df.head(5)
###Output
_____no_output_____
###Markdown
Evaluating for Missing DataThe missing values are now converted to Python's default missing value marker (NaN). We use built-in functions to identify these missing values. There are two methods to detect missing data: .isnull() and .notnull(). The output is a boolean value indicating whether the value passed into the argument is in fact missing data.
###Code
missing_data = df.isnull()
missing_data.head(5)
###Output
_____no_output_____
###Markdown
Count missing values in each columnUsing a for loop in Python, we can quickly figure out the number of missing values in each column. As mentioned above, "True" represents a missing value, "False" means the value is present in the dataset. In the body of the for loop the method ".value_counts()" counts the number of "True" values.
###Code
for column in missing_data.columns.values.tolist():
print(column)
print (missing_data[column].value_counts())
print("")
###Output
symboling
False 205
Name: symboling, dtype: int64
normalized-losses
False 164
True 41
Name: normalized-losses, dtype: int64
make
False 205
Name: make, dtype: int64
fuel-type
False 205
Name: fuel-type, dtype: int64
aspiration
False 205
Name: aspiration, dtype: int64
num-of-doors
False 203
True 2
Name: num-of-doors, dtype: int64
body-style
False 205
Name: body-style, dtype: int64
drive-wheels
False 205
Name: drive-wheels, dtype: int64
engine-location
False 205
Name: engine-location, dtype: int64
wheel-base
False 205
Name: wheel-base, dtype: int64
length
False 205
Name: length, dtype: int64
width
False 205
Name: width, dtype: int64
height
False 205
Name: height, dtype: int64
curb-weight
False 205
Name: curb-weight, dtype: int64
engine-type
False 205
Name: engine-type, dtype: int64
num-of-cylinders
False 205
Name: num-of-cylinders, dtype: int64
engine-size
False 205
Name: engine-size, dtype: int64
fuel-system
False 205
Name: fuel-system, dtype: int64
bore
False 201
True 4
Name: bore, dtype: int64
stroke
False 201
True 4
Name: stroke, dtype: int64
compression-ratio
False 205
Name: compression-ratio, dtype: int64
horsepower
False 203
True 2
Name: horsepower, dtype: int64
peak-rpm
False 203
True 2
Name: peak-rpm, dtype: int64
city-mpg
False 205
Name: city-mpg, dtype: int64
highway-mpg
False 205
Name: highway-mpg, dtype: int64
price
False 201
True 4
Name: price, dtype: int64
###Markdown
Deal with missing dataHow to deal with missing data? 1. drop data a. drop the whole row b. drop the whole column 2. replace data a. replace it by mean b. replace it by frequency c. replace it based on other functions Whole columns should be dropped only if most entries in the column are empty. In our dataset, none of the columns are empty enough to drop entirely. We have some freedom in choosing which method to replace data; however, some methods may seem more reasonable than others. We will apply each method to many different columns:Replace by mean: "normalized-losses": 41 missing data, replace them with mean "stroke": 4 missing data, replace them with mean "bore": 4 missing data, replace them with mean "horsepower": 2 missing data, replace them with mean "peak-rpm": 2 missing data, replace them with meanReplace by frequency: "num-of-doors": 2 missing data, replace them with "four". Reason: 84% sedans is four doors. Since four doors is most frequent, it is most likely to occurDrop the whole row: "price": 4 missing data, simply delete the whole row Reason: price is what we want to predict. Any data entry without price data cannot be used for prediction; therefore any row now without price data is not useful to us Calculate the average of the column
###Code
avg_norm_loss = df["normalized-losses"].astype("float").mean(axis=0)
print("Average of normalized-losses:", avg_norm_loss)
###Output
Average of normalized-losses: 122.0
###Markdown
Replace "NaN" by mean value in "normalized-losses" column
###Code
df["normalized-losses"].replace(np.nan, avg_norm_loss, inplace=True)
###Output
_____no_output_____
###Markdown
Calculate the mean value for 'bore' column and replace nan
###Code
avg_bore=df['bore'].astype('float').mean(axis=0)
print("Average of bore:", avg_bore)
df["bore"].replace(np.nan, avg_bore, inplace=True)
###Output
Average of bore: 3.3297512437810943
###Markdown
For Stroke
###Code
avg_stroke = df["stroke"].astype("float").mean(axis = 0)
print("Average of stroke:", avg_stroke)
# replace NaN by mean value in "stroke" column
df["stroke"].replace(np.nan, avg_stroke, inplace = True)
###Output
Average of stroke: 3.255422885572139
###Markdown
For Horse Power
###Code
avg_horsepower = df['horsepower'].astype('float').mean(axis=0)
print("Average horsepower:", avg_horsepower)
df['horsepower'].replace(np.nan, avg_horsepower, inplace=True)
###Output
Average horsepower: 104.25615763546799
###Markdown
For Peak-RPM
###Code
avg_peakrpm=df['peak-rpm'].astype('float').mean(axis=0)
print("Average peak rpm:", avg_peakrpm)
df['peak-rpm'].replace(np.nan, avg_peakrpm, inplace=True)
###Output
Average peak rpm: 5125.369458128079
###Markdown
To see which values are present in a particular column, we can use the ".value_counts()" method: For Number of Doors
###Code
df['num-of-doors'].value_counts()
df['num-of-doors'].value_counts().idxmax() #calculate Most Common
#replace the missing 'num-of-doors' values by the most frequent
df["num-of-doors"].replace(np.nan, "four", inplace=True)
###Output
_____no_output_____
###Markdown
For Price DataFinally, let's drop all rows that do not have price data:
###Code
# simply drop whole row with NaN in "price" column
df.dropna(subset=["price"], axis=0, inplace=True)
# reset index, because we droped two rows
df.reset_index(drop=True, inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Convert data types to proper format
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
As we can see above, some columns are not of the correct data type. Numerical variables should have type 'float' or 'int', and variables with strings such as categories should have type 'object'. For example, 'bore' and 'stroke' variables are numerical values that describe the engines, so we should expect them to be of the type 'float' or 'int'; however, they are shown as type 'object'. We have to convert data types into a proper format for each column using the "astype()" method.
###Code
df[["bore", "stroke"]] = df[["bore", "stroke"]].astype("float")
df[["normalized-losses"]] = df[["normalized-losses"]].astype("int")
df[["price"]] = df[["price"]].astype("float")
df[["peak-rpm"]] = df[["peak-rpm"]].astype("float")
df.dtypes
###Output
_____no_output_____
###Markdown
Data StandardizationStandardization is the process of transforming data into a common format, which allows the researcher to make meaningful comparisons.ExampleTransform mpg to L/100km:In our dataset, the fuel consumption columns "city-mpg" and "highway-mpg" are represented in mpg (miles per gallon) units. Assume we are developing an application for a country that uses the L/100km fuel consumption standard, so we need to apply a data transformation to convert mpg into L/100km.The formula for the unit conversion is L/100km = 235 / mpg. We can do many mathematical operations directly in pandas.
###Code
# Convert mpg to L/100km by mathematical operation (235 divided by mpg)
df['city-L/100km'] = 235/df["city-mpg"]
# check your transformed data
df.head()
# transform mpg to L/100km by mathematical operation (235 divided by mpg)
df["highway-mpg"] = 235/df["highway-mpg"]
# rename column name from "highway-mpg" to "highway-L/100km"
df.rename(columns={'highway-mpg':'highway-L/100km'}, inplace=True)
# check your transformed data
df.head()
###Output
_____no_output_____
###Markdown
Data Normalization Why normalization?Normalization is the process of transforming values of several variables into a similar range. Typical normalizations include scaling the variable so the variable average is 0, scaling the variable so the variance is 1, or scaling the variable so its values range from 0 to 1.ExampleTo demonstrate normalization, let's say we want to scale the columns "length", "width" and "height".Target: normalize those variables so their values range from 0 to 1.
###Code
# replace (original value) by (original value)/(maximum value)
df['length'] = df['length']/df['length'].max()
df['width'] = df['width']/df['width'].max()
df['height'] = df['height']/df['height'].max()
# show the scaled columns
df[["length","width","height"]].head()
###Output
_____no_output_____
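###Markdown
The other normalization schemes mentioned above can be written the same way; a hedged sketch of z-score scaling (mean 0, variance 1) applied to a copy of the same columns is shown below.
###Code
# Hedged sketch: z-score normalization on a copy, so the dataframe itself is left unchanged
scaled = df[["length", "width", "height"]].copy()
scaled = (scaled - scaled.mean()) / scaled.std()  # centre each column to mean 0 and unit variance
scaled.head()
###Output
_____no_output_____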
###Markdown
Binning Why binning?Binning is a process of transforming continuous numerical variables into discrete categorical 'bins' for grouped analysis.Example:In our dataset, "horsepower" is a real-valued variable ranging from 48 to 288 with 57 unique values. What if we only care about the price difference between cars with high horsepower, medium horsepower, and little horsepower (3 types)? Can we rearrange them into three 'bins' to simplify the analysis?We will use the pandas method 'cut' to segment the 'horsepower' column into 3 bins.
###Code
df["horsepower"]=df["horsepower"].astype(int, copy=True)
###Output
_____no_output_____
###Markdown
Let's plot the histogram of horsepower to see what the distribution of horsepower looks like.
###Code
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
plt.pyplot.hist(df["horsepower"])
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
###Output
_____no_output_____
###Markdown
We would like 3 bins of equal width, so we use numpy's linspace(start_value, end_value, numbers_generated) function.Since we want to include the minimum value of horsepower, we set start_value = min(df["horsepower"]).Since we want to include the maximum value of horsepower, we set end_value = max(df["horsepower"]).Since we are building 3 bins of equal length, there should be 4 dividers, so numbers_generated = 4.We build a bin array from the minimum value to the maximum value, with the bandwidth calculated above. The bins will be the values used to determine when one bin ends and another begins.
###Code
bins = np.linspace(min(df["horsepower"]), max(df["horsepower"]), 4)
bins
group_names = ['Low', 'Medium', 'High'] # set group
df['horsepower-binned'] = pd.cut(df['horsepower'], bins, labels=group_names, include_lowest=True )
df[['horsepower','horsepower-binned']].head(20)
df["horsepower-binned"].value_counts()
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
pyplot.bar(group_names, df["horsepower-binned"].value_counts())
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
###Output
_____no_output_____
###Markdown
Bins visualizationNormally, a histogram is used to visualize the distribution of bins we created above.
###Code
%matplotlib inline
import matplotlib as plt
from matplotlib import pyplot
a = (0,1,2)
# draw historgram of attribute "horsepower" with bins = 3
plt.pyplot.hist(df["horsepower"], bins = 3)
# set x/y labels and plot title
plt.pyplot.xlabel("horsepower")
plt.pyplot.ylabel("count")
plt.pyplot.title("horsepower bins")
###Output
_____no_output_____
###Markdown
Indicator variable (or dummy variable) What is an indicator variable?An indicator variable (or dummy variable) is a numerical variable used to label categories. They are called 'dummies' because the numbers themselves don't have inherent meaning.Why we use indicator variables?So we can use categorical variables for regression analysis in the later modules.ExampleWe see the column "fuel-type" has two unique values, "gas" or "diesel". Regression doesn't understand words, only numbers. To use this attribute in regression analysis, we convert "fuel-type" into indicator variables.We will use the panda's method 'get_dummies' to assign numerical values to different categories of fuel type.
###Code
df.columns
###Output
_____no_output_____
###Markdown
get indicator variables and assign it to data frame "dummy_variable_1"
###Code
dummy_variable_1 = pd.get_dummies(df["fuel-type"])
dummy_variable_1.head()
###Output
_____no_output_____
###Markdown
change column names for clarity
###Code
dummy_variable_1.rename(columns={'gas':'fuel-type-gas', 'diesel':'fuel-type-diesel'}, inplace=True) # rename the dummy columns created by get_dummies to clearer names
dummy_variable_1.head()
# merge data frame "df" and "dummy_variable_1"
df = pd.concat([df, dummy_variable_1], axis=1)
# drop original column "fuel-type" from "df"
df.drop("fuel-type", axis = 1, inplace=True)
df.head()
# get indicator variables of aspiration and assign it to data frame "dummy_variable_2"
dummy_variable_2 = pd.get_dummies(df['aspiration'])
# change column names for clarity
dummy_variable_2.rename(columns={'std':'aspiration-std', 'turbo': 'aspiration-turbo'}, inplace=True)
# show first 5 instances of data frame "dummy_variable_2"
dummy_variable_2.head()
# merge the new dataframe into the original dataframe
df = pd.concat([df, dummy_variable_2], axis=1)
# drop original column "aspiration" from "df"
df.drop('aspiration', axis = 1, inplace=True)
df.head()
###Output
_____no_output_____ |
scratch/deep-gaussian-processes.ipynb | ###Markdown
Sparse Gaussian Process
###Code
import matplotlib as mpl; mpl.use('pgf')
%matplotlib inline
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# import tensorflow as tf
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
from collections import defaultdict
from matplotlib import animation
from IPython.display import HTML
from scribbles.gaussian_processes import gp_sample_custom, dataframe_from_gp_samples
golden_ratio = 0.5 * (1 + np.sqrt(5))
golden_size = lambda width: (width, width / golden_ratio)
width = 10
rc = {
"figure.figsize": golden_size(width),
"text.usetex": True,
}
sns.set(context="notebook", style="ticks", palette="colorblind", font="serif", rc=rc)
# shortcuts
tfd = tfp.distributions
kernels = tfp.math.psd_kernels
# constants
n_train = 500
observation_noise_variance = 1e-1
n_features = 1 # dimensionality
n_index_points = 256 # nbr of index points
n_samples = 8 # nbr of GP prior samples
jitter = 1e-2
kernel_cls = kernels.ExponentiatedQuadratic
n_inducing_points = 20
n_epochs = 2000
batch_size = 50
seed = 42 # set random seed for reproducibility
random_state = np.random.RandomState(seed)
x_min, x_max = -1.0, 1.0
y_min, y_max = -3.0, 3.0
x_loc = -0.5
# index points
X_q = np.linspace(x_min, x_max, n_index_points).reshape(-1, n_features)
f = lambda x: np.sin(12.0*x) + 0.66*np.cos(25.0*x)
X = x_loc + random_state.rand(n_train, n_features)
eps = observation_noise_variance * random_state.randn(n_train, n_features)
Y = np.squeeze(f(X) + eps)
fig, ax = plt.subplots()
ax.plot(X_q, f(X_q), label="true")
ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
ax.legend()
ax.set_xlim(x_loc, -x_loc)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
plt.show()
class BatchIdentity(tf.keras.initializers.Identity):
def __call__(self, shape, dtype=None):
return super(BatchIdentity, self).__call__(shape[-2:], dtype=None)
shape = (16, 5, 5)
shape[:-2]
tf.keras.backend.repeat(tf.keras.initializers.Identity(gain=1.0)(shape=(5, 5)), n=2)
def identity_initializer(shape, dtype=None):
*batch_shape, num_rows, num_columns = shape
return tf.eye(num_rows, num_columns,
batch_shape=batch_shape, dtype=dtype)
class VGP(tf.keras.layers.Layer):
def __init__(self, units, kernel_provider, num_inducing_points=64, mean_fn=None, jitter=1e-6, **kwargs):
self.units = units # TODO: Maybe generalize to `event_shape`?
self.num_inducing_points = num_inducing_points
self.kernel_provider = kernel_provider
self.mean_fn = mean_fn
self.jitter = jitter
super(VGP, self).__init__(**kwargs)
def build(self, input_shape):
input_dim = input_shape[-1]
self.inducing_index_points = self.add_weight(
name="inducing_index_points",
shape=(self.units, self.num_inducing_points, input_dim),
initializer=tf.keras.initializers.RandomUniform(-1, 1), # TODO: initialization
trainable=True)
self.variational_inducing_observations_loc = self.add_weight(
name="variational_inducing_observations_loc",
shape=(self.units, self.num_inducing_points),
initializer='zeros', trainable=True)
self.variational_inducing_observations_scale = self.add_weight(
name="variational_inducing_observations_scale",
shape=(self.units, self.num_inducing_points, self.num_inducing_points),
initializer=identity_initializer, trainable=True)
super(VGP, self).build(input_shape)
def call(self, x):
base = tfd.VariationalGaussianProcess(
kernel=self.kernel_provider.kernel,
index_points=x,
inducing_index_points=self.inducing_index_points,
variational_inducing_observations_loc=self.variational_inducing_observations_loc,
variational_inducing_observations_scale=self.variational_inducing_observations_scale,
mean_fn=self.mean_fn,
predictive_noise_variance=1e-1,
jitter=self.jitter
)
# sum KL divergence between `units` independent processes
self.add_loss(tf.reduce_sum(base.surrogate_posterior_kl_divergence_prior()))
bijector = tfp.bijectors.Transpose(rightmost_transposed_ndims=2)
qf = tfd.TransformedDistribution(tfd.Independent(base, reinterpreted_batch_ndims=1),
bijector=bijector)
return qf.sample()
def compute_output_shape(self, input_shape):
return (input_shape[0], self.units)
class RBFKernelFn(tf.keras.layers.Layer):
# TODO: automatic relevance determination
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self.ln_amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype, name='amplitude')
self.ln_length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype, name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return kernels.ExponentiatedQuadratic(
amplitude=tf.exp(self.ln_amplitude),
length_scale=tf.exp(self.ln_length_scale)
)
model = tf.keras.models.Sequential([
VGP(16, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6),
VGP(32, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6),
VGP(1, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6)
])
model.losses
model(X_q)
model.losses
kl = tf.reduce_sum(model.losses) / n_train
kl
f1 = VGP(16, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6)(X_q)
f2 = VGP(32, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6)(f1)
f3 = VGP(1, kernel_provider=RBFKernelFn(dtype="float64"), jitter=1e-6)(f2)
f3
fig, ax = plt.subplots()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
ax.plot(X_q, f1.eval())
plt.show()
m(X_q)
inducing_index_points_initial = random_state.choice(X.squeeze(), size=(5, n_inducing_points)) \
.reshape(5, n_inducing_points, n_features)
inducing_index_points_initial.shape
inducing_index_points = tf.Variable(inducing_index_points_initial,
name='inducing_index_points')
variational_inducing_observations_loc = tf.Variable(np.zeros((5, n_inducing_points)),
name='variational_inducing_observations_loc')
variational_inducing_observations_scale = tf.Variable(
tf.eye(n_inducing_points, batch_shape=(5,), dtype="float64"), name='variational_inducing_observations_scale')
vgp = tfd.VariationalGaussianProcess(
kernel=kernel,
index_points=X_q,
inducing_index_points=inducing_index_points,
variational_inducing_observations_loc=variational_inducing_observations_loc,
variational_inducing_observations_scale=variational_inducing_observations_scale,
observation_noise_variance=0.0,
jitter=jitter
)
vgp.sample()
bijector = tfp.bijectors.Transpose(rightmost_transposed_ndims=2)
bijector
res = tfd.TransformedDistribution(tfd.Independent(vgp, reinterpreted_batch_ndims=1),
bijector=bijector)
res
res.sample()
variational_inducing_observations_scale
?np.identity
# amplitude = tf.exp(tf.Variable(np.float64(0)), name='amplitude')
# length_scale = tf.exp(tf.Variable(np.float64(-1)), name='length_scale')
# observation_noise_variance = tf.exp(tf.Variable(np.float64(-5)), name='observation_noise_variance')
# kernel = kernel_cls(amplitude=amplitude, length_scale=length_scale)
# gp = tfd.GaussianProcess(
# kernel=kernel,
# index_points=X,
# observation_noise_variance=observation_noise_variance
# )
# nll = - gp.log_prob(Y)
# nll
# optimizer = tf.train.AdamOptimizer(learning_rate=.05, beta1=.5, beta2=.99)
# optimize = optimizer.minimize(nll)
# history = defaultdict(list)
# with tf.Session() as sess:
# sess.run(tf.global_variables_initializer())
# for i in range(500):
# (_, nll_value, amplitude_value, length_scale_value,
# observation_noise_variance_value) = sess.run([optimize, nll, amplitude, length_scale, observation_noise_variance])
# history["nll"].append(nll_value)
# history["amplitude"].append(amplitude_value)
# history["length_scale"].append(length_scale_value)
# history["observation_noise_variance"].append(observation_noise_variance_value)
# fig, ax = plt.subplots()
# sns.lineplot(x='amplitude', y='length_scale',
# sort=False, data=pd.DataFrame(history), alpha=0.8, ax=ax)
# ax.set_xlabel(r"amplitude $\sigma$")
# ax.set_ylabel(r"lengthscale $\ell$")
# plt.show()
# kernel_history = kernel_cls(amplitude=history.get("amplitude"), length_scale=history.get("length_scale"))
# gprm_history = tfd.GaussianProcessRegressionModel(
# kernel=kernel_history, index_points=X_q, observation_index_points=X, observations=Y,
# observation_noise_variance=history.get("observation_noise_variance"), jitter=jitter
# )
# gprm_mean = gprm_history.mean()
# gprm_stddev = gprm_history.stddev()
# with tf.Session() as sess:
# gprm_mean_value, gprm_stddev_value = sess.run([gprm_mean, gprm_stddev])
# fig, ax = plt.subplots()
# ax.plot(X_q, gprm_mean_value[0])
# ax.fill_between(np.squeeze(X_q),
# gprm_mean_value[0] - 2*gprm_stddev_value[0],
# gprm_mean_value[0] + 2*gprm_stddev_value[0], alpha=0.1)
# ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
# ax.set_xlabel('$x$')
# ax.set_ylabel('$y$')
# ax.set_ylim(y_min, y_max)
# plt.show()
# fig, ax = plt.subplots()
# ax.plot(X_q, gprm_mean_value[-1])
# ax.fill_between(np.squeeze(X_q),
# gprm_mean_value[-1] - 2*gprm_stddev_value[-1],
# gprm_mean_value[-1] + 2*gprm_stddev_value[-1], alpha=0.1)
# ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
# ax.set_xlabel('$x$')
# ax.set_ylabel('$y$')
# ax.set_ylim(y_min, y_max)
# plt.show()
amplitude = tf.exp(tf.Variable(np.float64(0)), name='amplitude')
length_scale = tf.exp(tf.Variable(np.float64(-1)), name='length_scale')
observation_noise_variance = tf.exp(tf.Variable(np.float64(-5)), name='observation_noise_variance')
kernel = kernel_cls(amplitude=amplitude, length_scale=length_scale)
inducing_index_points_initial = random_state.choice(X.squeeze(), size=(5, n_inducing_points)) \
.reshape(5, n_inducing_points, n_features)
inducing_index_points_initial.shape
# bijector = tfp.bijectors.Chain([tfp.bijectors.CholeskyOuterProduct(),
# ])
# bijector
n_inducing_points = 20
inducing_index_points = tf.Variable(inducing_index_points_initial,
name='inducing_index_points')
# variational_inducing_observations_loc = tf.Variable(np.zeros(n_inducing_points),
# name='variational_inducing_observations_loc')
# variational_inducing_observations_scale = tf.Variable(
# np.eye(n_inducing_points), name='variational_inducing_observations_scale')
# variational_inducing_observations_scale = tfp.util.TransformedVariable(
# np.eye(n_inducing_points), tfp.bijectors.FillTriangular(), name='variational_inducing_observations_scale'
# )
# variational_inducing_observations_scale_flat = tf.Variable(
# random_state.rand(n_inducing_points * (n_inducing_points + 1) // 2),
# name='variational_inducing_observations_scale_flat')
# variational_inducing_observations_scale = tfp.math.fill_triangular(variational_inducing_observations_scale_flat)
dataset = tf.data.Dataset.from_tensor_slices((X, Y)) \
.shuffle(buffer_size=500) \
.batch(batch_size, drop_remainder=True)
iterator = tf.data.make_initializable_iterator(dataset)
X_batch, Y_batch = iterator.get_next()
X_batch, Y_batch
[variational_inducing_observations_loc,
variational_inducing_observations_scale] = tfd.VariationalGaussianProcess.optimal_variational_posterior(
kernel=kernel,
inducing_index_points=inducing_index_points,
observation_index_points=X,
observations=Y,
observation_noise_variance=observation_noise_variance
)
vgp = tfd.VariationalGaussianProcess(
kernel=kernel,
index_points=X_batch,
inducing_index_points=inducing_index_points,
variational_inducing_observations_loc=variational_inducing_observations_loc,
variational_inducing_observations_scale=variational_inducing_observations_scale,
observation_noise_variance=observation_noise_variance,
jitter=jitter
)
vgp
tfd.Independent(vgp, reinterpreted_batch_ndims=1)
bij = tfp.bijectors.Transpose(rightmost_transposed_ndims=2)
d = tfd.TransformedDistribution(tfd.Independent(vgp, reinterpreted_batch_ndims=1), bijector=bij)
vgp.sample(index_points=X_batch)
nelbo = vgp.variational_loss(
observations=Y_batch,
observation_index_points=X_batch,
kl_weight=batch_size/n_train
)
optimizer = tf.train.AdamOptimizer()
optimize = optimizer.minimize(nelbo)
steps_per_epoch = n_train // batch_size
steps_per_epoch
n_epochs = 100
history = defaultdict(list)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(n_epochs):
sess.run(iterator.initializer)
for j in range(steps_per_epoch):
# sess.run(optimize)
(_, nelbo_value,
amplitude_value,
length_scale_value,
observation_noise_variance_value,
inducing_index_points_value,
variational_inducing_observations_loc_value,
variational_inducing_observations_scale_value) = sess.run([optimize,
nelbo,
amplitude,
length_scale,
observation_noise_variance,
inducing_index_points,
variational_inducing_observations_loc,
variational_inducing_observations_scale])
history["nelbo"].append(nelbo_value)
history["amplitude"].append(amplitude_value)
history["length_scale"].append(length_scale_value)
history["observation_noise_variance"].append(observation_noise_variance_value)
history["inducing_index_points"].append(inducing_index_points_value)
history["variational_inducing_observations_loc"].append(variational_inducing_observations_loc_value)
history["variational_inducing_observations_scale"].append(variational_inducing_observations_scale_value)
inducing_index_points_history = np.stack(history["inducing_index_points"])
inducing_index_points_history.shape
segments_min_history = np.dstack(np.broadcast_arrays(inducing_index_points_history, y_min))
segments_max_history = np.dstack([inducing_index_points_history,
history["variational_inducing_observations_loc"]])
segments_history = np.stack([segments_max_history, segments_min_history], axis=-2)
segments_history.shape
kernel_history = kernel_cls(amplitude=history.get("amplitude"), length_scale=history.get("length_scale"))
vgp_history = tfd.VariationalGaussianProcess(
kernel=kernel_history,
index_points=X_q,
inducing_index_points=np.stack(history.get("inducing_index_points")),
variational_inducing_observations_loc=np.stack(history.get("variational_inducing_observations_loc")),
variational_inducing_observations_scale=np.stack(history.get("variational_inducing_observations_scale")),
observation_noise_variance=history.get("observation_noise_variance")
)
vgp_mean = vgp_history.mean()
vgp_stddev = vgp_history.stddev()
with tf.Session() as sess:
vgp_mean_value, vgp_stddev_value = sess.run([vgp_mean[::10], vgp_stddev[::10]])
fig, ax = plt.subplots()
ax.plot(X_q, gprm_mean_value[-1])
ax.fill_between(np.squeeze(X_q),
gprm_mean_value[-1] - 2*gprm_stddev_value[-1],
gprm_mean_value[-1] + 2*gprm_stddev_value[-1], alpha=0.1)
ax.plot(X_q, vgp_mean_value[-1])
ax.fill_between(np.squeeze(X_q),
vgp_mean_value[-1] - 2*vgp_stddev_value[-1],
vgp_mean_value[-1] + 2*vgp_stddev_value[-1], alpha=0.1)
ax.scatter(X, Y, marker='x', color='k', label="noisy observations")
ax.vlines(history["inducing_index_points"][-1], ymin=y_min,
ymax=history["variational_inducing_observations_loc"][-1],
color='k', linewidth=1.0, alpha=0.4)
ax.set_xlabel('$x$')
ax.set_ylabel('$y$')
ax.set_ylim(y_min, y_max)
plt.show()
fig, (ax1, ax2) = plt.subplots(nrows=2, sharex=True, gridspec_kw=dict(hspace=0.1))
ax1.scatter(X, Y, marker='x', color='k')
ax1.plot(X_q, gprm_mean_value[-1])
ax1.fill_between(np.squeeze(X_q),
gprm_mean_value[-1] - 2*gprm_stddev_value[-1],
gprm_mean_value[-1] + 2*gprm_stddev_value[-1], alpha=0.1)
line_mean, = ax1.plot(X_q, vgp_mean_value[-1], color="tab:orange")
line_stddev_lower, = ax1.plot(X_q, vgp_mean_value[-1] - 2*vgp_stddev_value[-1],
color="tab:orange", alpha=0.4)
line_stddev_upper, = ax1.plot(X_q, vgp_mean_value[-1] + 2*vgp_stddev_value[-1],
color="tab:orange", alpha=0.4)
vlines_inducing_index_points = ax1.vlines(inducing_index_points_history[-1].squeeze(),
ymax=history["variational_inducing_observations_loc"][-1],
ymin=y_min, linewidth=1.0, alpha=0.4)
ax1.set_ylabel(r'$y$')
ax1.set_ylim(y_min, y_max)
lines_inducing_index_points = ax2.plot(inducing_index_points_history.squeeze(), range(n_epochs),
color='k', linewidth=1.0, alpha=0.4)
ax2.set_xlabel(r"$x$")
ax2.set_ylabel("epoch")
plt.show()
def animate(i):
line_mean.set_data(X_q, vgp_mean_value[i])
line_stddev_lower.set_data(X_q, vgp_mean_value[i] - 2*vgp_stddev_value[i])
line_stddev_upper.set_data(X_q, vgp_mean_value[i] + 2*vgp_stddev_value[i])
vlines_inducing_index_points.set_segments(segments_history[i])
for j, line in enumerate(lines_inducing_index_points):
line.set_data(inducing_index_points_history[:i, j], range(i))
ax2.relim()
ax2.autoscale_view(scalex=False)
return line_mean, line_stddev_lower, line_stddev_upper
anim = animation.FuncAnimation(fig, animate, frames=n_epochs,
interval=60, repeat_delay=5, blit=True)
# HTML(anim.to_html5_video())
###Output
_____no_output_____ |
2-EDA/2-Pandas/Practica/02_Filtering_&_Sorting/Fictional Army/Fictional Army aula.ipynb | ###Markdown
Fictional Army - Filtering and Sorting
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. This is the data given as a dictionary
###Code
# Create an example dataframe about a fictional army
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd','1st', '1st', '2nd', '2nd'],
'deaths': [523, 52, 25, 616, 43, 234, 523, 62, 62, 73, 37, 35],
'battles': [5, 42, 2, 2, 4, 7, 8, 3, 4, 7, 8, 9],
'size': [1045, 957, 1099, 1400, 1592, 1006, 987, 849, 973, 1005, 1099, 1523],
'veterans': [1, 5, 62, 26, 73, 37, 949, 48, 48, 435, 63, 345],
'readiness': [1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 2, 3],
'armored': [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
'deserters': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
'origin': ['Arizona', 'California', 'Texas', 'Florida', 'Maine', 'Iowa', 'Alaska', 'Washington', 'Oregon', 'Wyoming', 'Louisana', 'Georgia']}
###Output
_____no_output_____
###Markdown
Step 3. Create a dataframe and assign it to a variable called army. Don't forget to include the columns names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, pandas will order the columns alphabetically.
###Code
army = pd.DataFrame(data=raw_data)
army.head()
###Output
_____no_output_____
###Markdown
Step 4. Set the 'origin' column as the index of the dataframe
###Code
army = army.set_index('origin')
# army.set_index('origin', inplace = True) # alternative in-place version -- don't run both, 'origin' is already the index after the line above
army
###Output
_____no_output_____
###Markdown
Step 5. Print only the column veterans
###Code
army['veterans']
###Output
_____no_output_____
###Markdown
Step 6. Print the columns 'veterans' and 'deaths'
###Code
army[['veterans', 'deaths']]
army.loc[:, ['veterans', 'deaths']]
###Output
_____no_output_____
###Markdown
Step 7. Print the name of all the columns.
###Code
army.columns
###Output
_____no_output_____
###Markdown
Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska
###Code
army.loc['Maine': 'Alaska', ['deaths', 'size', 'deserters']]
army.loc[['Maine','Iowa','Alaska'], ['deaths', 'size', 'deserters']]
###Output
_____no_output_____
###Markdown
Step 9. Select the rows 3 to 7 and the columns 3 to 6
###Code
army.iloc[3:8, 3:7]
###Output
_____no_output_____
###Markdown
Step 10. Select every row after the fourth row and all columns
###Code
army[4:]
army.iloc[4:,:]
###Output
_____no_output_____
###Markdown
Step 11. Select every row up to the 4th row and all columns
###Code
army.iloc[:4, :]
###Output
_____no_output_____
###Markdown
Step 12. Select the 3rd column up to the 7th column
###Code
army.iloc[:, 2:7]
###Output
_____no_output_____
###Markdown
Step 13. Select rows where df.deaths is greater than 50
###Code
#army[army['deaths']>50]
#display(army.loc[army['deaths']>50])
'''
mask = army.deaths > 50
mask
print(mask)
display(army)
army.loc[mask]
'''
#display(army)
#army.loc[army.deaths > 50 , 'deserters' ]
#army.loc[army['deaths'] > 50 , ['deserters', 'size'] ]
army.loc[army['deaths']>50]
###Output
_____no_output_____
###Markdown
Step 14. Select rows where df.deaths is greater than 500 or less than 50
###Code
mask = (army['deaths'] > 500) | (army['deaths'] < 50)
army[mask]
#army.loc[mask]
###Output
_____no_output_____
###Markdown
Step 15. Select all the regiments not named "Dragoons"
###Code
army['regiment'] != 'Dragoons'
army.loc[army['regiment'] != 'Dragoons']
###Output
_____no_output_____
###Markdown
Step 16. Select the rows called Texas and Arizona
###Code
army.loc[ ['Texas', 'Arizona'] ]
army[(army.index == 'Texas') | (army.index == 'Arizona')]
army[army.index.isin(['Texas', 'Arizona'])]
###Output
_____no_output_____
###Markdown
Step 17. Select the third cell in the row named Arizona
###Code
#display(army)
army.iloc[:,2].loc["Arizona"]
army.loc['Arizona'][2]
army.iloc[army.index == 'Arizona', 2]
army.loc[['Arizona'], army.columns[2]]
###Output
_____no_output_____
###Markdown
Step 18. Select the third cell down in the column named deaths
###Code
army['deaths'].iloc[2] # third value from the top of the 'deaths' column
###Output
_____no_output_____ |
lab10/decomposition/plot_image_denoising.ipynb | ###Markdown
Image denoising using dictionary learningAn example comparing the effect of reconstructing noisy fragments of a raccoon face image using, first, online `DictionaryLearning` and various transform methods.The dictionary is fitted on the distorted left half of the image, and subsequently used to reconstruct the right half. Note that even better performance could be achieved by fitting to an undistorted (i.e. noiseless) image, but here we start from the assumption that it is not available.A common practice for evaluating the results of image denoising is by looking at the difference between the reconstruction and the original image. If the reconstruction is perfect this will look like Gaussian noise.It can be seen from the plots that the results of `omp` with two non-zero coefficients are a bit less biased than when keeping only one (the edges look less prominent). They are, in addition, closer to the ground truth in Frobenius norm.The result of `least_angle_regression` is much more strongly biased: the difference is reminiscent of the local intensity value of the original image.Thresholding is clearly not useful for denoising, but it is here to show that it can produce a suggestive output with very high speed, and thus be useful for other tasks such as object classification, where performance is not necessarily related to visualisation.
###Code
print(__doc__)
from time import time
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.feature_extraction.image import reconstruct_from_patches_2d
try: # SciPy >= 0.16 have face in misc
from scipy.misc import face
face = face(gray=True)
except ImportError:
face = sp.face(gray=True)
# Convert from uint8 representation with values between 0 and 255 to
# a floating point representation with values between 0 and 1.
face = face / 255.
# downsample for higher speed
face = face[::2, ::2] + face[1::2, ::2] + face[::2, 1::2] + face[1::2, 1::2]
face /= 4.0
height, width = face.shape
# Distort the right half of the image
print('Distorting image...')
distorted = face.copy()
distorted[:, width // 2:] += 0.075 * np.random.randn(height, width // 2)
# Extract all reference patches from the left half of the image
print('Extracting reference patches...')
t0 = time()
patch_size = (7, 7)
data = extract_patches_2d(distorted[:, :width // 2], patch_size)
data = data.reshape(data.shape[0], -1)
data -= np.mean(data, axis=0)
data /= np.std(data, axis=0)
print('done in %.2fs.' % (time() - t0))
# #############################################################################
# Learn the dictionary from reference patches
print('Learning the dictionary...')
t0 = time()
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1, n_iter=500)
V = dico.fit(data).components_
dt = time() - t0
print('done in %.2fs.' % dt)
plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(V[:100]):
plt.subplot(10, 10, i + 1)
plt.imshow(comp.reshape(patch_size), cmap=plt.cm.gray_r,
interpolation='nearest')
plt.xticks(())
plt.yticks(())
plt.suptitle('Dictionary learned from face patches\n' +
'Train time %.1fs on %d patches' % (dt, len(data)),
fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
# #############################################################################
# Display the distorted image
def show_with_diff(image, reference, title):
"""Helper function to display denoising"""
plt.figure(figsize=(5, 3.3))
plt.subplot(1, 2, 1)
plt.title('Image')
plt.imshow(image, vmin=0, vmax=1, cmap=plt.cm.gray,
interpolation='nearest')
plt.xticks(())
plt.yticks(())
plt.subplot(1, 2, 2)
difference = image - reference
plt.title('Difference (norm: %.2f)' % np.sqrt(np.sum(difference ** 2)))
plt.imshow(difference, vmin=-0.5, vmax=0.5, cmap=plt.cm.PuOr,
interpolation='nearest')
plt.xticks(())
plt.yticks(())
plt.suptitle(title, size=16)
plt.subplots_adjust(0.02, 0.02, 0.98, 0.79, 0.02, 0.2)
show_with_diff(distorted, face, 'Distorted image')
# #############################################################################
# Extract noisy patches and reconstruct them using the dictionary
print('Extracting noisy patches... ')
t0 = time()
data = extract_patches_2d(distorted[:, width // 2:], patch_size)
data = data.reshape(data.shape[0], -1)
intercept = np.mean(data, axis=0)
data -= intercept
print('done in %.2fs.' % (time() - t0))
transform_algorithms = [
('Orthogonal Matching Pursuit\n1 atom', 'omp',
{'transform_n_nonzero_coefs': 1}),
('Orthogonal Matching Pursuit\n2 atoms', 'omp',
{'transform_n_nonzero_coefs': 2}),
('Least-angle regression\n5 atoms', 'lars',
{'transform_n_nonzero_coefs': 5}),
('Thresholding\n alpha=0.1', 'threshold', {'transform_alpha': .1})]
reconstructions = {}
for title, transform_algorithm, kwargs in transform_algorithms:
print(title + '...')
reconstructions[title] = face.copy()
t0 = time()
dico.set_params(transform_algorithm=transform_algorithm, **kwargs)
code = dico.transform(data)
patches = np.dot(code, V)
patches += intercept
patches = patches.reshape(len(data), *patch_size)
if transform_algorithm == 'threshold':
patches -= patches.min()
patches /= patches.max()
reconstructions[title][:, width // 2:] = reconstruct_from_patches_2d(
patches, (height, width // 2))
dt = time() - t0
print('done in %.2fs.' % dt)
show_with_diff(reconstructions[title], face,
title + ' (time: %.1fs)' % dt)
plt.show()
###Output
_____no_output_____ |
Addition.ipynb | ###Markdown
Heat maps addition
###Code
%load_ext autoreload
%autoreload
import heat_maps
import utils
import itertools
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
plt.style.use('ggplot')
sns.set(color_codes=True)
###Output
_____no_output_____
###Markdown
Load sample data
###Code
hm1 = heat_maps.read('test_hm_1.bmp')
hm2 = heat_maps.read('test_hm_2.bmp')
print hm1.shape, hm2.shape
utils.compare_2d(hm1, hm2, title_left='Heatmap A', title_right='Heatmap B')
###Output
_____no_output_____
###Markdown
Fit B-spline surface
###Code
tck1 = heat_maps.fit(hm1)
tck2 = heat_maps.fit(hm2)
###Output
_____no_output_____
###Markdown
Addition
###Code
# compute average net of knot vectors in both directions (U & V)
t_average = [
heat_maps.linearMean(tck1[0], tck2[0], [0.0, 319.0], order=4),
heat_maps.linearMean(tck1[1], tck2[1], [0.0, 179.0], order=4)
]
f, ax = plt.subplots(1, 2, figsize=(15, 5))
for i in range(2):
ax[i].plot(tck1[i], 'b.')
ax[i].plot(tck2[i], 'g.')
ax[i].plot(t_average[i], 'r-')
ax[0].set_title('Knots in direction U')
ax[1].set_title('Knots in direction V')
plt.show()
# Fit surface above average knot net
tck_hm1_mean = heat_maps.fit(hm1, t_average)
tck_hm2_mean = heat_maps.fit(hm2, t_average)
# add coefficients of both heat maps
tkc_merged = [
t_average[0],
t_average[1],
tck_hm1_mean[2] + tck_hm2_mean[2],
tck1[3],
tck2[4]
]
X, Y = np.arange(0.0, hm2.shape[1], 1.0), np.arange(0.0, hm2.shape[0], 1.0)
hm_merged = heat_maps.approx(tkc_merged, X, Y)
utils.compare_2d(hm1 + hm2, hm_merged, title_left='Values addition', title_right='Surface addition')
utils.err(hm1+hm2, hm_merged)
###Output
_____no_output_____ |
Teoria_HW3.ipynb | ###Markdown
Basically, we need to find the maximum sum of non-consecutive values.Below is the pseudocode of the algorithm. Input: an array v of N elementsOutput: a list of values giving the max sum of non-consecutive values1 **Start**2 list=[] empty list in which we will insert the values3 While v is not empty:4 find the max in v and note its index i5 list.append(v[i]) add the max value6 delete v[i-1], v[i], v[i+1] delete the found value and its neighbors7 **end**8 Return(list) This is the implemented program:
###Code
import numpy as np
n=int(input("How many appointment requests are there? --->"))
l = []
for i in range(1,n+1):
x=int(input("Insert a value that correspond to the duration of the appointment --->"))
l.append(x)
print (l) #our list
l_1=[] #list for new values
while len(l)>0:
m=max(l) #find max
i=l.index(m) #max index
index=[i-1,i,i+1] #consecutive values
l_1.append(m)
for j in sorted([k for k in index if k >= 0], reverse=True): # pop larger indices first so earlier pops don't shift positions; skip negative indices
try:
l.pop(j)
#print(l)
except IndexError:
pass
print(l_1)
###Output
_____no_output_____ |
in-class/.ipynb_checkpoints/week_2_inclass_exercises_DEMO-checkpoint.ipynb | ###Markdown
Week 2 - Flow Control The following play critical roles: 1. Indentation - running blocks of code.2. Time Delays - pacing the speed of our code.3. For Loops - iterating through data4. For Loops through multiple but related lists5. For Loops through Dictionaries6. Conditional Statements 1. Indentation* Python is unique in requiring indentations.* Indentations signify the start and end of code that belongs together (code blocks).* Without proper indentation, your code won't do what you expect.* Not working as expected? Check if you have indented correctly! Basic Flow Example: A Counter
###Code
## Using a While loop build a counter that counts from 1 to 5.
## Print the counter numbers in statement that reads "The count is" whatever the count is.
## Once it reaches 5, it should print "Done counting to 5!"
count = 1
while count <=5:
print(f"The count is {count}")
count = count +1
print("done counting to 5")
###Output
The count is 1
The count is 2
The count is 3
The count is 4
The count is 5
done counting to 5
###Markdown
You just controlled flow using indentation and a while loop. How fast does our code run?
###Code
import datetime as dt
counter = 1
while counter <6:
current = dt.datetime.now() # the exact current time
print(f"The count is {counter} and this process ran at {current}")
# counter = counter + 1
counter += 1
print("Done printing to 5")
###Output
The count is 1 and this process ran at 2021-09-13 13:59:17.292169
The count is 2 and this process ran at 2021-09-13 13:59:17.292419
The count is 3 and this process ran at 2021-09-13 13:59:17.292441
The count is 4 and this process ran at 2021-09-13 13:59:17.292453
The count is 5 and this process ran at 2021-09-13 13:59:17.292464
Done printing to 5
###Markdown
2. Time Delays**Delay timers** are critical when scraping data from websites for several reasons. The **two** most important reasons are:1. Sometimes your scraper clicks on links and must wait for the content to actually populate on the new page. Your script is likely to run faster than a page can load.2. You don't want your scraper to be mistaken for a hostile attack on a server. You have to slow down the scrapes. Step 1 - Import required libraries
###Code
import time # time is required. we will use its sleep function
# import datetime as dt ## we already imported this earlier, but you'd need it if starting fresh
###Output
_____no_output_____
###Markdown
Let's add a 5-second delay:
###Code
counter = 1
while counter <6:
current = dt.datetime.now() # the exact current time
print(f"The count is {counter} and this process ran at {current}")
# counter = counter + 1
counter += 1
time.sleep(5)
print("Done printing to 5")
###Output
The count is 1 and this process ran at 2021-09-13 14:04:38.373754
The count is 2 and this process ran at 2021-09-13 14:04:43.378471
The count is 3 and this process ran at 2021-09-13 14:04:48.382150
The count is 4 and this process ran at 2021-09-13 14:04:53.387505
The count is 5 and this process ran at 2021-09-13 14:04:58.392683
Done printing to 5
###Markdown
RandomizeSoftware that tracks traffic to a server might grow suspicious about a hit every n seconds.Let's **randomize** the time between hits by using ```randint``` from the ```random``` library.You might sometimes see me use ```randrange``` from the ```random``` library: ```from random import randrange```. What's the difference?**Difference 1**```randrange``` is exclusive of the final range value.```randint``` is inclusive of the final range value.**Difference 2**```randrange``` allows you to add a step: ```randrange(start, end, step)``````randint``` only has start and end: ```randint(start, end)```
###Code
from random import randint # import necessary library
randint(0,10)
from random import randrange ## import necessary library
randrange(0, 10)
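# Hedged aside: unlike randint, randrange also accepts an optional step,
# e.g. an even number from 0 up to (but not including) 10
randrange(0, 10, 2)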
# we've already imported random
counter = 1
while counter <6:
mysnoozer = randint(4, 10)
current = dt.datetime.now() # the exact current time
print(f"The count is {counter} and this process ran at {current}.\
snooze for {mysnoozer} seconds!")
counter += 1
time.sleep(mysnoozer)
print("Done printing to 5")
pip install icecream
from icecream import ic
rent = 1000
food = 400
expenses = rent + food
print(f"the total expenses is: {expenses} and rent is {rent} and food is {food} ")
ic(rent)
ic(expenses)
###Output
ic| expenses: 1400
###Markdown
3. For Loops For Loops are your best friend - the most used Python expression for journalists: Iterate over:* data stored in a list and run some calculation on each value;* a list of URLs and visit each site to scrape data;* data stored in dictionary keys and values and return what you are looking for. A simple ```for loop``` example: Let's take **For Loops** for a test drive:
###Code
## RUN THIS CELL - Use this list of CEO salaries from 1985
ceo_salaries_1985 = [150_000, 201_000, 110_000, 75_000, 92_000, 55_000]
ceo_salaries_1985
## Print each salary with in the following format:
## "A CEO earned [some value] in 1985."
for each_salary in ceo_salaries_1985:
print(f"A ceo earned ${each_salary:,} in 1985")
## Now update each salary to 2019 dollars.
## Print the following info:
## "A CEO's salary of [1985 salary] in 1985 is worth [updated salary] in 2019 dollars."
## The CPI for 1985 is 107.6
## The 2019 CPI is 255.657
## The formula is: updated_salary = (oldSalary/oldCPI) * currentCPI
salaries_2019 = []
old_salary = [150_000, 201_000, 110_000, 75_000, 92_000, 55_000]
for ceo in old_salary:
updated_salary = (ceo/107.6) * 255.657
print(f"A CEO's salary of ${ceo:,} in 1985 is worth ${updated_salary:,.0f}\
in 2019 dollars.")
salaries_2019.append(updated_salary)
salaries_2019
ceo_salaries_1985
sals_2019 = [(salary/107.6) * 255.657 for salary in ceo_salaries_1985]
sals_2019
###Output
_____no_output_____
###Markdown
4. For Loops through multiple but related lists
###Code
## RUN THIS CELL - You scrape a site and each datapoint is stored in different lists
first_names = ["Irene", "Ursula", "Elon", "Tim"]
last_names = ["Rosenfeld", "Burns", "Musk", "Cook"]
titles = ["Chairman and CEO", "Chairman and CEO", "CEO", "CEO"]
companies = ["Kraft Foods", "Xerox", "Tesla", "Apple"]
industries = ["Food and Beverage", "Process and Document Management", "Auto Manufacturing", "Consumer Technology"]
###Output
_____no_output_____
###Markdown
Use ```zip()``` to zip lists together
###Code
## with zip
## also print what each type of data is.
ceo_list = []
for item in zip(first_names, last_names, titles, companies, industries):
print(item)
print(type(item))
ceo_list.append(item)
item
ceo_list
## zip it and store in a list called ceo_list
## export to a pandas dataframe
import pandas as pd
df = pd.DataFrame(ceo_list)
df.columns =["first_name", "last_name", "title", "company", "industry"]
df
## export to a csv
filename = "ceo_bios.csv"
df.to_csv(filename, index = False, encoding ="UTF-8")
## recall that dictionaries are like columns and rows in a csv
## let's turn this csv into a dataframe
pd.read_csv("ceo_bios.csv")
###Output
_____no_output_____
###Markdown
Turn tuples into lists
###Code
## zip lists into a list of lists
ceo_list = []
for item in zip(first_names, last_names, titles, companies, industries):
print(list(item))
print(type(list(item)))
ceo_list.append(list(item))
###Output
['Irene', 'Rosenfeld', 'Chairman and CEO', 'Kraft Foods', 'Food and Beverage']
<class 'list'>
['Ursula', 'Burns', 'Chairman and CEO', 'Xerox', 'Process and Document Management']
<class 'list'>
['Elon', 'Musk', 'CEO', 'Tesla', 'Auto Manufacturing']
<class 'list'>
['Tim', 'Cook', 'CEO', 'Apple', 'Consumer Technology']
<class 'list'>
###Markdown
5. For Loops through Dictionaries
###Code
## You have a list of CEO salaries from 1969.
sals_1969 = [47_000, 65_000, 39_000, 96_000]
sals_1969
## We need the value of these salaries updated for every decade till 2019
## Here are the CPIs for each decade in list of dictionaries from 1969 to 2019.
decades_cpi = [
{"year": 1979, "cpi": 72.6,},
{"year": 1989, "cpi": 124},
{"year": 1999, "cpi": 166.6},
{"year": 2009, "cpi": 214.537},
{"year": 2019, "cpi": 255.657}
]
## Show the contents of this list of dictionaries
decades_cpi
## What datatype is decades_cpi
type(decades_cpi)
# Check what type of data each list item is within decades_cpi
for item in decades_cpi:
print(type(item))
## Print out each value in this format:
## "key --> value"
decades_cpi[0]
for decade in decades_cpi:
# print(decade)
for key, value in decade.items():
print(f"{key} ---->{value}")
print("********")
value
###Output
_____no_output_____
###Markdown
The key alternates between the strings "year" and "cpi" in this loop. How do we actually target the values for "year" and "cpi" and place them in our calculations?
###Code
## show it here:
for decade in decades_cpi:
that_year = decade.get("year")
old_cpi = decade.get("cpi")
print(old_cpi)
old_cpi
sals_1969
## Loop through each salary and update its value for each decade
sals_1969
updated_sals = []
cpi_69 = 36.7
## iterate through list of salaries with for loop
for salary in sals_1969:
# ic(salary)
## iterate through list of dictionaries to retrive year and old cpi
for decade in decades_cpi:
that_year = decade.get("year")
old_cpi = decade.get("cpi")
# ic(that_year)
# ic(old_cpi)
## calc updated salary for that decade
updated_salary = (salary/cpi_69) * old_cpi
# ic(updated_salary)
updated_sals.append(updated_salary)
len(updated_sals)
updated_sals
###Output
_____no_output_____
###Markdown
6. Conditional Statements
###Code
## create a list of 10 random numbers anywhere from -100 to 100
##name the list numbers
###Output
_____no_output_____
###Markdown
Create conditional statements that tell us if the last number and the penultimate number are positive or negative. Print a sentence that reads:```"The last number [what is it?] is [positive or negative] while the penultimate number [what is it?] is [negative or positive]."```
###Code
## if else statements
###Output
_____no_output_____
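###Markdown
A hedged sketch of one way to fill in the two exercise cells above: build the random list, then describe the last two values with if/else checks.
###Code
# Hedged sketch: build the list, then check the last and penultimate numbers
from random import randint
numbers = [randint(-100, 100) for _ in range(10)]
if numbers[-1] >= 0:
    last_sign = "positive"
else:
    last_sign = "negative"
if numbers[-2] >= 0:
    penultimate_sign = "positive"
else:
    penultimate_sign = "negative"
print(f"The last number ({numbers[-1]}) is {last_sign} while the penultimate number ({numbers[-2]}) is {penultimate_sign}.")
###Output
_____no_output_____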
###Markdown
Ternary Expression```variable = value1 if some_condition else value2```
###Code
## ternary expression
# as ternary expression
###Output
_____no_output_____
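###Markdown
A hedged sketch of the same last-number check rewritten as a ternary expression (reusing the `numbers` list from the sketch above):
###Code
# Hedged sketch: ternary version of the positive/negative check
last_sign = "positive" if numbers[-1] >= 0 else "negative"
print(f"The last number ({numbers[-1]}) is {last_sign}.")
###Output
_____no_output_____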
###Markdown
Multiple Ternary Expression```variable = value1 if condition1 else value2 if condition2 else value3```
###Code
## A simple example
'''
write a simple if else statement that prints out x is greater than y,
or y is greater than x or if they are equal.
'''
## Now as a ternary
###Output
_____no_output_____
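###Markdown
A hedged sketch of the x/y comparison described above, first as if/elif/else and then as a multiple ternary (x and y are made-up example values):
###Code
# Hedged sketch: if/elif/else version, then the equivalent multiple ternary
x, y = 3, 7
if x > y:
    print("x is greater than y")
elif y > x:
    print("y is greater than x")
else:
    print("x and y are equal")
result = "x is greater than y" if x > y else "y is greater than x" if y > x else "x and y are equal"
print(result)
###Output
_____no_output_____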
###Markdown
Conditionals as Tuples```("False: Does not meet condition", "True: Meets condition")[conditional expression]```
###Code
age = 20
## conditional tuple
###Output
_____no_output_____
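###Markdown
A hedged sketch of the tuple-indexing pattern above, using the `age` variable from the previous cell; the boolean condition indexes the tuple, so False selects position 0 and True selects position 1.
###Code
# Hedged sketch: conditional expressed as a tuple lookup
("False: under 18", "True: 18 or older")[age >= 18]
###Output
_____no_output_____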
###Markdown
ChallengeWrite a ternary expression to update the first conditional exercise above to deal with zeros. For example, if the random list generates:```[46, 30, 31, -56, 18, 57, -90, 81, 0, 0]```It should print out:```The last number (0) is neither negative or positive at zero while the penultimate number (0) is neither negative or positive at zero.```
###Code
## activate the list
numbers = [46, 30, 31, -56, 18, 57, -90, 81, 0, 0]
## write your multiple ternary expression
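# Hedged sketch of one possible answer, using nested ternaries to handle zero:
last, penult = numbers[-1], numbers[-2]
last_desc = "neither negative or positive at zero" if last == 0 else "positive" if last > 0 else "negative"
penult_desc = "neither negative or positive at zero" if penult == 0 else "positive" if penult > 0 else "negative"
print(f"The last number ({last}) is {last_desc} while the penultimate number ({penult}) is {penult_desc}.")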
###Output
_____no_output_____ |
01-01.train_model.ipynb | ###Markdown
XXXXXXXX
###Code
!git clone https://github.com/Kazuhito00/7-segment-display-reader
###Output
Cloning into '7-segment-display-reader'...
remote: Enumerating objects: 21908, done.
remote: Counting objects: 100% (21908/21908), done.
remote: Compressing objects: 100% (21889/21889), done.
remote: Total 21908 (delta 39), reused 21869 (delta 16), pack-reused 0
Receiving objects: 100% (21908/21908), 62.92 MiB | 30.55 MiB/s, done.
Resolving deltas: 100% (39/39), done.
Checking out files: 100% (42000/42000), done.
###Markdown
XXXXXXXX
###Code
!git clone https://github.com/Kazuhito00/7seg-image-generator.git
!python '7seg-image-generator/create_7segment_dataset_da(easy).py' \
--erase_debug_window \
--steps=4000 \
--start_count=10000000
###Output
100% 4000/4000 [00:37<00:00, 107.80it/s]
###Markdown
XXXXXXXX
###Code
%cp -rf './7-segment-display-reader/01.dataset/00' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/01' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/02' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/03' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/04' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/05' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/06' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/07' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/08' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/09' './dataset'
%cp -rf './7-segment-display-reader/01.dataset/11' './dataset'
###Output
_____no_output_____
###Markdown
XXXXXXXX
###Code
import os
dataset_directory = './dataset'
train_directory = './train'
validation_directory = './validation'
# Create the training data directory (same structure as dataset_directory)
for dir_path in os.listdir(dataset_directory):
os.makedirs(train_directory + '/' + dir_path, exist_ok=True)
# Create the validation data directory (same structure as dataset_directory)
for dir_path in os.listdir(dataset_directory):
os.makedirs(validation_directory + '/' + dir_path, exist_ok=True)
import os
import random
import numpy as np
import tensorflow as tf
seed = 42
random.seed(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
import glob
import shutil
import random
train_ratio = 0.75  # proportion of the data used for training
random.seed(42)
# Get the list of source directories to copy from
directory_list = glob.glob(dataset_directory + '/*')
for temp_directory in directory_list:
file_list = glob.glob(temp_directory + '/*')
    # Copy each file into the train or validation directory
for index, filepath in enumerate(file_list):
if index < int(len(file_list) * train_ratio):
            # Training data
shutil.copy2(filepath, train_directory + '/' + os.path.basename(temp_directory))
else:
            # Validation data
shutil.copy2(filepath, validation_directory + '/' + os.path.basename(temp_directory))
###Output
_____no_output_____
###Markdown
XXXXXXX
###Code
!pip install -U albumentations
# Data augmentation settings using Albumentations
import albumentations as A
def preprocessing_augmentation_function(param_p = 0.0):
transform = [
A.ShiftScaleRotate(shift_limit=0.1,
scale_limit=0.1,
rotate_limit=10,
p=param_p),
A.MotionBlur(blur_limit=15, p=param_p),
A.GlassBlur(sigma=0.15, max_delta=4, iterations=1, p=param_p),
A.RandomBrightnessContrast(brightness_limit=0.2,
contrast_limit=0.2,
brightness_by_max=True,
p=param_p),
A.RGBShift(r_shift_limit=10,
g_shift_limit=10,
b_shift_limit=10,
p=param_p),
A.Cutout(num_holes=8,
max_h_size=8,
max_w_size=8,
fill_value=0,
p=param_p),
A.Cutout(num_holes=8,
max_h_size=8,
max_w_size=8,
fill_value=255,
p=param_p),
]
augmentation_function = A.Compose(transform)
def augmentation(x):
augmentation_image = augmentation_function(image=x)
return augmentation_image['image']
return augmentation
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_image_da_generator = ImageDataGenerator(
rescale=1.0/255,
preprocessing_function=preprocessing_augmentation_function(0.1),
)
validation_image_generator = ImageDataGenerator(rescale=1.0/255)
batch_size = 64
image_height, image_width = 96, 96
train_data_gen = train_image_da_generator.flow_from_directory(
batch_size=batch_size,
directory=train_directory,
shuffle=True,
target_size=(image_height, image_width),
class_mode='categorical'
)
validation_data_gen = validation_image_generator.flow_from_directory(
batch_size=batch_size,
directory=validation_directory,
shuffle=False,
target_size=(image_height, image_width),
class_mode='categorical'
)
base_model = tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet', input_shape=(96, 96, 3), alpha=0.35)
base_model.trainable = True
x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
output = tf.keras.layers.Dense(12, activation='softmax', name='last_output')(x)
model = tf.keras.Model(inputs=base_model.inputs, outputs=output, name='model')
model.compile(
optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy']
)
# Callback for saving model checkpoints
checkpoint_path = os.path.join(os.getcwd(), 'checkpoints', 'weights.hdf5')
cp_callback = tf.keras.callbacks.ModelCheckpoint(
checkpoint_path,
verbose=1,
save_best_only=True,
mode='auto',
save_weights_only=False,
save_freq='epoch'
)
# Callback that reduces the learning rate when the validation metric stops improving
lrp_callback = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1)
# Callback for early stopping
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1)
epochs = 100
history = model.fit(
train_data_gen,
epochs=epochs,
validation_data=validation_data_gen,
callbacks=[cp_callback, lrp_callback, es_callback]
)
###Output
Epoch 1/100
1055/1055 [==============================] - ETA: 0s - loss: 0.7381 - accuracy: 0.7472
Epoch 00001: val_loss improved from inf to 1.44679, saving model to /content/checkpoints/weights.hdf5
###Markdown
XXXXXXXX
###Code
evaluate_result = model.evaluate_generator(validation_data_gen)
print('Validation Loss:' + str(evaluate_result[0]))
print('Validation Accuracy:' + str(evaluate_result[1]))
import matplotlib.pyplot as plt
def plot_history(history):
plt.figure(figsize=(19, 6))
    # Plot the accuracy history
plt.subplot(1, 2, 1)
plt.title('accuracy')
plt.plot(history.history['accuracy'],"-",label="accuracy")
plt.plot(history.history['val_accuracy'],"-",label="val_accuracy")
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(loc="lower right")
    # Plot the loss history
plt.subplot(1, 2, 2)
plt.title('loss')
plt.plot(history.history['loss'],"-",label="loss",)
plt.plot(history.history['val_loss'],"-",label="val_loss")
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(loc='upper right')
plt.show()
plot_history(history)
import math
import cv2 as cv
import numpy as np
def draw_images(images):
column = 6
row = math.ceil(len(images) / column)
plt.figure(figsize=(16, 11))
plt.subplots_adjust(wspace=0.4, hspace=0.6)
for i, image in enumerate(images):
debug_image = cv.imread(image[1])
plt.subplot(row, column, i+1)
plt.title(str(image[0]), fontsize=10)
plt.tick_params(color='white')
plt.tick_params(labelbottom=False, labelleft=False, labelright=False, labeltop=False)
plt.imshow(cv.cvtColor(debug_image, cv.COLOR_BGR2RGB))
plt.xlabel('', fontsize=15)
plt.ylabel('', rotation=0, fontsize=15, labelpad=20)
plt.show()
Y_pred = model.predict_generator(validation_data_gen)
y_pred = np.argmax(Y_pred, axis=1)
incorrect_numbers = []
for true_num, pred_num, filepath in zip(validation_data_gen.classes, y_pred, validation_data_gen.filepaths):
if true_num != pred_num:
if pred_num == 10:
incorrect_numbers.append(['-(10)', filepath])
elif pred_num == 11:
incorrect_numbers.append(['N(11)', filepath])
else:
incorrect_numbers.append([pred_num, filepath])
draw_images(incorrect_numbers)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:25: UserWarning: `Model.predict_generator` is deprecated and will be removed in a future version. Please use `Model.predict`, which supports generators.
###Markdown
XXXXXXXX
###Code
model_path = 'checkpoints/weights.hdf5'
load_model = tf.keras.models.load_model(model_path)
from IPython.display import Image, display_png
from tensorflow.keras.preprocessing.image import img_to_array, load_img
test_image = tf.keras.preprocessing.image.load_img('./validation/07/00007004.jpg', target_size=(96, 96))
display_png(test_image)
test_image = img_to_array(test_image)
test_image = test_image.reshape(-1, 96, 96, 3)
test_image = test_image.astype('float32')
test_image = test_image * 1.0/255
predict_result = load_model.predict(test_image)
print(np.squeeze(predict_result))
print(np.argmax(np.squeeze(predict_result)))
###Output
[2.2122640e-06 6.4582217e-08 4.0999529e-07 7.6501377e-10 1.3395091e-07
1.1328411e-06 2.9578780e-08 9.9999595e-01 2.8156240e-09 2.3762277e-08
1.2261952e-08 6.3104553e-09]
7
###Markdown
XXXXXXXX
###Code
load_model.save('7seg_classifier.hdf5', include_optimizer=False)
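# Convert the Keras model to TensorFlow Lite; Optimize.DEFAULT applies
# post-training (dynamic-range) quantization to the weights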
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quantized_model = converter.convert()
open('7seg_classifier.tflite', 'wb').write(tflite_quantized_model)
###Output
INFO:tensorflow:Assets written to: /tmp/tmpl9niaoo4/assets
###Markdown
XXXXXXXX
###Code
interpreter = tf.lite.Interpreter(model_path="7seg_classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)
interpreter.set_tensor(input_details[0]['index'], test_image)
interpreter.invoke()
tflite_results = interpreter.get_tensor(output_details[0]['index'])
print(np.squeeze(tflite_results))
print(np.argmax(np.squeeze(tflite_results)))
###Output
[2.3053269e-06 5.3328641e-08 4.0264879e-07 8.3536766e-10 1.3598803e-07
1.3750700e-06 3.1720809e-08 9.9999571e-01 2.5853140e-09 2.5920599e-08
9.1866488e-09 5.2289262e-09]
7
|
past-team-code/Fall2018Team2/Final Solution - Old Iteration/Full_MVP_Sentiment_JK-2.ipynb | ###Markdown
Predict Prices using Sentiment and ARIMA Strategy: 1. Predict Sentiment 2. Show relationship between sentiment and price 3. Create article scoring based on sentiment and price prediction 1. Import and Preprocess Data 1.1 Import and pre-process articles
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import datetime as dt
from pandas import DataFrame
from datetime import datetime,tzinfo
from pytz import timezone
import time
import pytz
import csv
plt.style.use('fivethirtyeight')
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
articles = pd.read_csv("./data/Classified Articles.csv")
articles.head(1)
articles["timeStamp"] = pd.to_datetime(articles['date'] + ' ' + articles['time'])
articles = articles.set_index("timeStamp")
articles.head(1)
len(articles)
min(articles.index)
max(articles.index)
###Output
_____no_output_____
###Markdown
1.1.1 Import and Pre-process hand-labeled sentiment
###Code
data = pd.read_csv("./data/Articles Reading Assignment.csv")
data.head(1)
data = data.dropna()
#data["Sentiment"] += 1
#data["Sentiment"] /= 2
data["contents"] = ["" for i in range(len(data))]
data["title"] = ["" for i in range(len(data))]
data["date"] = ["" for i in range(len(data))]
data["time"] = ["" for i in range(len(data))]
data["marks"] = ["" for i in range(len(data))]
for i, row in data.iterrows():
x = row["URL"]
key_words = articles[articles["source_url"] == x][:1]["contents"].values[0]
data.at[i, "contents"] = str(key_words)
title = articles[articles["source_url"] == x][:1]["title"].values[0]
data.at[i, "title"] = str(title)
date = articles[articles["source_url"] == x][:1]["date"].values[0]
data.at[i, "date"] = date
time = articles[articles["source_url"] == x][:1]["time"].values[0]
data.at[i, "time"] = time
marks = articles[articles["source_url"] == x][:1]["marks"].values[0]
data.at[i, "marks"] = marks
data.head(1)
data["timeStamp"] = pd.to_datetime(data['date'] + ' ' + data['time'])
data = data.set_index("timeStamp")
data.head(1)
len(data.loc[data["Sentiment"] == data["marks"]])
len(data)
data["Sentiment"] = pd.to_numeric(data["Sentiment"])
data["marks"] = pd.to_numeric(data["marks"])
data.head(1)
###Output
_____no_output_____
###Markdown
1.2 Import and pre-process Bitcoin price data
###Code
# import data: bitcoin prices
btc = pd.read_csv("./data/coinbaseUSD_1-min_data_2014-12-01_to_2018-03-27.csv")
# preprocess bitcoin price data
btc.Timestamp = pd.to_datetime(btc.Timestamp, unit='s')
btc.Timestamp = btc.Timestamp.dt.tz_localize('UTC')
btc['log_close'] = np.log(btc.Close) - np.log(btc.Close.shift(1))
btc['Date'] = pd.to_datetime(btc['Timestamp']).dt.date
min_periods = 43200 # 60minutes*24hours*30days
price=btc['Close']
# Calculate the sd and volatility
mean=price.rolling(min_periods).mean()
sd=price.rolling(min_periods).std()
vol = price.rolling(min_periods).std() * np.sqrt(min_periods)
btc['Average']=mean
btc['Volatility']=vol
btc['SD']=sd
price_log=btc['log_close']
# Calculate the sd and volatility
mean=price_log.rolling(min_periods).mean()
sd=price_log.rolling(min_periods).std()
vol = price_log.rolling(min_periods).std() * np.sqrt(min_periods)
btc['Average_log']=mean
btc['Volatility_log']=vol
btc['SD_log']=sd
index_1 = btc[btc.Date == datetime.date(dt.datetime.strptime('01/23/18', '%x'))].index[0]
index_2 = btc[btc.Date == datetime.date(dt.datetime.strptime('03/27/18', '%x'))].index[0]
btc_1= btc.loc[index_1:index_2]
btc_1 = btc_1.set_index("Timestamp")
max(articles.index)
min(articles.index)
###Output
_____no_output_____
###Markdown
Plot data
###Code
btc_close = btc_1['Close']
plt.plot(btc_close)
plt.show()
btc_log_close = btc_1['log_close']
plt.plot(btc_log_close)
plt.show()
###Output
_____no_output_____
###Markdown
Create a function to convert the Bitcoin price series to any time frame (a hedged sketch follows below). 1.3 Create lags of responses and merge the data from sections 1 and 2
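The resampling helper mentioned above is never actually defined later in the notebook; a minimal sketch of what it could look like (an assumption, relying on `btc_1` keeping its `Timestamp` index and `Close` column) is:

```python
def resample_btc(price_df, freq='1H'):
    """Aggregate minute-level closes to an arbitrary pandas frequency string, e.g. '1H' or '1D'."""
    return price_df["Close"].resample(freq).last().dropna()

# Example usage: hourly_close = resample_btc(btc_1, '1H')
```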
###Code
# create response add response with multiple lags in seconds
import datetime
import time
start = time.time()
response = pd.DataFrame()
benchmark_naive = pd.DataFrame()
# 1 minute, 5 minutes, 10 minutes, 30 minutes, 60 minutes, 12 hours, 1 day, 2 days, 4 days
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d" : 345600}
train_time = pd.to_datetime(articles.index)
for colname in colnames:
count = 0
stock_return = []
stock_return_naive = []
lag = colnames[colname]
for i in train_time:
count +=1
try:
start_price = btc_1.Close.iloc[btc_1.index.get_loc(i,method = "nearest")]
end_price = btc_1.Close.iloc[btc_1.index.get_loc((i+datetime.timedelta(0,lag)),method = "nearest")]
stock_return.append(end_price/start_price-1)
end_price_naive = btc_1.Close.iloc[btc_1.index.get_loc(i,method = "nearest")]
start_price_naive = btc_1.Close.iloc[btc_1.index.get_loc((i-datetime.timedelta(0,lag)),method = "nearest")]
stock_return_naive.append(end_price_naive/start_price_naive-1)
#if lag ==86400:
# print(start_price,i)
#print(end_price,(i+datetime.timedelta(0,lag)))
#print("")
except:
stock_return.append(0)
stock_return_naive.append(0)
print("exception raised")
#if count ==10:
#break
response[colname] = stock_return
benchmark_naive[colname] = stock_return_naive
print("time elapsed:",round((time.time()-start)/60,1),"minutes")
response.head(1)
benchmark_naive.head(1)
###Output
_____no_output_____
###Markdown
1.4.1 Sentiment Assignment
###Code
data.head(1)
len(data)
###Output
_____no_output_____
###Markdown
1.4.1.1 NLP
###Code
# create a class which handles NLP tokenization, stemming/lemmatizing and transformation to vectors
class nlp_validation_sets:
def __init__(self,fold,validation,train_data,max_features=10,method_nlp = "stem",ngram = 1):
self.fold = fold
self.validation = validation
self.max_features = max_features
self.method_nlp = method_nlp
self.ngram = ngram
self.train_data = train_data
self.stemmed_word_list = []
self.tokenized_word_list = []
self.stemmed_word_list_only_bad = []
self.tokenized_word_list_only_bad = []
self.stemmed_word_list_val = []
self.tokenized_word_list_val = []
self.stemmed_word_list_train = []
self.tokenized_word_list_train = []
def choose_w2v(self,method = "tfidf"):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
if method == "tfidf":
self.tfidf = TfidfVectorizer(max_features = self.max_features,ngram_range =(1,self.ngram),
max_df = 1.0,min_df = 1)
if method == "count":
self.tfidf = CountVectorizer(max_features = self.max_features,ngram_range = (1,self.ngram),
max_df = 1.0,min_df = 1)
def fit(self):
import time
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
import re
start = time.time()
stop_words_english = set(stopwords.words('english'))
stem = PorterStemmer()
for text in self.fold.contents:
            wordList = re.sub(r"[^\w]", " ", text).split()
stem_words = []
token_words = []
for word in wordList:
if not word.lower() in stop_words_english:
stem_words.append(stem.stem(word.lower()))
token_words.append(word.lower())
self.stemmed_word_list.append(" ".join(str(x) for x in stem_words))
self.tokenized_word_list.append(" ".join(str(x) for x in token_words))
for text in self.train_data.contents:
            wordList = re.sub(r"[^\w]", " ", text).split()
stem_words = []
token_words = []
for word in wordList:
if not word.lower() in stop_words_english:
stem_words.append(stem.stem(word.lower()))
token_words.append(word.lower())
self.stemmed_word_list_train.append(" ".join(str(x) for x in stem_words))
self.tokenized_word_list_train.append(" ".join(str(x) for x in token_words))
if self.method_nlp == "stem":
self.tfidf.fit(self.stemmed_word_list_train)
if self.method_nlp == "token":
self.tfidf.fit(self.tokenized_word_list_train)
for text in self.validation.contents:
            wordList = re.sub(r"[^\w]", " ", text).split()
stem_words = []
token_words = []
for word in wordList:
if not word.lower() in stop_words_english:
stem_words.append(stem.stem(word.lower()))
token_words.append(word.lower())
self.stemmed_word_list_val.append(" ".join(str(x) for x in stem_words))
self.tokenized_word_list_val.append(" ".join(str(x) for x in token_words))
print("time elapsed",round((time.time()-start)/60,1))
def transform_test(self):
if self.method_nlp == "stem":
return self.tfidf.transform(self.stemmed_word_list_val)
if self.method_nlp == "token":
return self.tfidf.transform(self.tokenized_word_list_val)
def transform_train(self):
return self.tfidf.transform(self.stemmed_word_list)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train_sen, Y_test_sen = train_test_split(data,data.Sentiment, test_size = 0.25, random_state = 42)
X_train, X_test, Y_train_mar, Y_test_mar = train_test_split(data,data.marks, test_size = 0.25, random_state = 42)
X_test.shape
X_train.shape
sentiment_nlp = nlp_validation_sets(fold = X_train,validation = X_test,train_data = X_train,
max_features=100,method_nlp = "stem",ngram = 1)
sentiment_nlp.choose_w2v(method = "tfidf")
sentiment_nlp.fit()
X = sentiment_nlp.transform_train()
X.shape
X_test = sentiment_nlp.transform_test()
X_test.shape
###Output
_____no_output_____
###Markdown
1.4.1.2 Model fitting Marks
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
lr = LogisticRegression()
lr.fit(X,Y_train_mar)
pred = lr.predict(X_test)
pred
accuracy_score(Y_test_mar,pred)
print("no information accuracy", np.mean(Y_test_mar))
confusion_matrix(pred,Y_test_mar)
###Output
_____no_output_____
###Markdown
Hand-labeled Sentiment Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
rf = RandomForestClassifier()
n_estimators = [500]
max_features = [.3,.5,.7,1.0]
max_depth = [None]
param_grid = {'n_estimators': n_estimators,'max_features': max_features,"max_depth":max_depth}
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 2, n_jobs = -1, verbose = 2)
# Fit the random search model
grid_search.fit(X, Y_train_sen)
best_param = grid_search.best_params_
rf_d = RandomForestClassifier(n_estimators = best_param["n_estimators"],max_features = best_param["max_features"]
,max_depth = best_param["max_depth"])
rf_d.fit(X,Y_train_sen)
pred = rf_d.predict(X_test)
best_param
pred
accuracy_score(Y_test_sen,pred)
print("no information accuracy", len(X_train.loc[X_train.Sentiment == 0])/len(X_train))
confusion_matrix(Y_test_sen,pred)
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
#https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
gbm = GradientBoostingClassifier()
max_features = [0.3,0.6,0.8]
subsample = [1]
max_depth = [1,3,6]
learning_rate = [.01]
n_estimators=[50,100,150,200]
param_grid = {'max_features': max_features,"max_depth":max_depth,
"subsample":subsample,"learning_rate":learning_rate,"n_estimators":n_estimators}
grid_search = GridSearchCV(estimator = gbm, param_grid = param_grid,
cv = 2, n_jobs = -1, verbose = 2)
# Fit the random search model
grid_search.fit(X,Y_train_sen)
best_param = grid_search.best_params_
gbm_d = GradientBoostingClassifier(n_estimators = best_param["n_estimators"],max_features = best_param["max_features"]
,max_depth = best_param["max_depth"],learning_rate = best_param["learning_rate"])
gbm_d.fit(X,Y_train_sen)
pred = gbm_d.predict(X_test)
best_param
pred
accuracy_score(Y_test_sen,pred)
print("no information accuracy", len(X_train.loc[X_train.Sentiment == 0])/len(X_train))
confusion_matrix(Y_test_sen,pred)
###Output
_____no_output_____
###Markdown
Prediction
###Code
main_articles_nlp = nlp_validation_sets(fold = X_train,validation = articles,train_data = X_train,
max_features=100,method_nlp = "stem",ngram = 1)
main_articles_nlp.choose_w2v(method = "count")
main_articles_nlp.fit()
X = main_articles_nlp.transform_test()
X.shape
pred_main_articles = rf_d.predict(X)
pred_main_articles.shape
articles["predicted_sentiment"] = pred_main_articles
articles.head(1)
articles.to_csv("Classified Articles with Predicted Sentiment")
###Output
_____no_output_____
###Markdown
2. Relationship between Sentiment and Price 2.1 train test split
###Code
articles.head(1)
articles.shape
response.shape
response.head()
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d": 345600}
response = response.set_index(articles.index)
for i in colnames:
articles[i] = response[i]
#split = .75
#train = articles.iloc[0:(round(len(articles)*split)),:]
#test = articles.iloc[-(round(len(articles)*(1-split))):]
#train_response = response.iloc[0:(round(len(response)*split)),:]
#test_response = response.iloc[-(round(len(response)*(1-split))):]
#train_naive = benchmark_naive.iloc[0:(round(len(benchmark_naive)*split)),:]
#test_naive = benchmark_naive.iloc[-(round(len(benchmark_naive)*(1-split))):]
from sklearn.model_selection import train_test_split
train, test, train_response, test_response = train_test_split(articles,response, test_size = 0.25, random_state = 42)
train_naive,test_naive,a,b = train_test_split(benchmark_naive,response, test_size = 0.25, random_state = 42)
train.shape
train_response.shape
test.shape
articles.shape
test_response.shape
test.shape
train_response.shape
sns.boxplot(train_response["lag_1d"])
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d": 345600}
###Output
_____no_output_____
###Markdown
2.2 Mean Difference for Sentiment
###Code
articles.head(1)
round(np.mean(articles.lag_30m)*100,4)
round(np.mean(articles.lag_30m.loc[articles.predicted_sentiment==0.0])*100,4)
round(np.mean(articles.lag_30m.loc[articles.predicted_sentiment==1.0])*100,4)
round(np.mean(articles.lag_30m.loc[articles.predicted_sentiment==-1.0])*100,4)
round(np.mean(articles.lag_30m.loc[articles.marks==0.0])*100,4)
round(np.mean(articles.lag_30m.loc[articles.marks==1.0])*100,4)
###Output
_____no_output_____
###Markdown
2.3 Evaluate difference using linear regression
###Code
import statsmodels.api as sm
for j in colnames:
lm = sm.OLS(articles[j],sm.add_constant(articles.predicted_sentiment)).fit()
print("OLS for ",j)
print(lm.summary())
print("")
print("")
import statsmodels.api as sm
for j in colnames:
lm = sm.OLS(articles[j],sm.add_constant(articles.marks)).fit()
print("OLS for ",j)
print(lm.summary())
print("")
print("")
###Output
OLS for lag_1m
OLS Regression Results
==============================================================================
Dep. Variable: lag_1m R-squared: 0.000
Model: OLS Adj. R-squared: -0.000
Method: Least Squares F-statistic: 0.5884
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.443
Time: 22:59:45 Log-Likelihood: 1.9797e+05
No. Observations: 40732 AIC: -3.959e+05
Df Residuals: 40730 BIC: -3.959e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -2.335e-05 1.77e-05 -1.320 0.187 -5.8e-05 1.13e-05
marks 1.595e-05 2.08e-05 0.767 0.443 -2.48e-05 5.67e-05
==============================================================================
Omnibus: 9475.151 Durbin-Watson: 1.842
Prob(Omnibus): 0.000 Jarque-Bera (JB): 267636.258
Skew: 0.495 Prob(JB): 0.00
Kurtosis: 15.519 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_5m
OLS Regression Results
==============================================================================
Dep. Variable: lag_5m R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: 9.921
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.00163
Time: 22:59:45 Log-Likelihood: 1.6592e+05
No. Observations: 40732 AIC: -3.318e+05
Df Residuals: 40730 BIC: -3.318e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0001 3.89e-05 -3.356 0.001 -0.000 -5.43e-05
marks 0.0001 4.57e-05 3.150 0.002 5.43e-05 0.000
==============================================================================
Omnibus: 9405.589 Durbin-Watson: 1.662
Prob(Omnibus): 0.000 Jarque-Bera (JB): 184538.855
Skew: 0.615 Prob(JB): 0.00
Kurtosis: 13.355 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_10m
OLS Regression Results
==============================================================================
Dep. Variable: lag_10m R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: 6.322
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.0119
Time: 22:59:45 Log-Likelihood: 1.5148e+05
No. Observations: 40732 AIC: -3.030e+05
Df Residuals: 40730 BIC: -3.029e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -9.132e-05 5.54e-05 -1.649 0.099 -0.000 1.72e-05
marks 0.0002 6.51e-05 2.514 0.012 3.61e-05 0.000
==============================================================================
Omnibus: 11248.319 Durbin-Watson: 1.568
Prob(Omnibus): 0.000 Jarque-Bera (JB): 251043.943
Skew: 0.802 Prob(JB): 0.00
Kurtosis: 15.056 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_30m
OLS Regression Results
==============================================================================
Dep. Variable: lag_30m R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: 2.647
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.104
Time: 22:59:45 Log-Likelihood: 1.2956e+05
No. Observations: 40732 AIC: -2.591e+05
Df Residuals: 40730 BIC: -2.591e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0001 9.49e-05 -1.148 0.251 -0.000 7.71e-05
marks 0.0002 0.000 1.627 0.104 -3.71e-05 0.000
==============================================================================
Omnibus: 8941.356 Durbin-Watson: 1.376
Prob(Omnibus): 0.000 Jarque-Bera (JB): 135923.786
Skew: 0.640 Prob(JB): 0.00
Kurtosis: 11.857 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_60m
OLS Regression Results
==============================================================================
Dep. Variable: lag_60m R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: 7.281
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.00697
Time: 22:59:45 Log-Likelihood: 1.1617e+05
No. Observations: 40732 AIC: -2.323e+05
Df Residuals: 40730 BIC: -2.323e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0001 0.000 -1.065 0.287 -0.000 0.000
marks 0.0004 0.000 2.698 0.007 0.000 0.001
==============================================================================
Omnibus: 5897.620 Durbin-Watson: 1.324
Prob(Omnibus): 0.000 Jarque-Bera (JB): 50146.005
Skew: 0.437 Prob(JB): 0.00
Kurtosis: 8.365 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_12h
OLS Regression Results
==============================================================================
Dep. Variable: lag_12h R-squared: 0.005
Model: OLS Adj. R-squared: 0.005
Method: Least Squares F-statistic: 211.0
Date: Sun, 02 Dec 2018 Prob (F-statistic): 1.09e-47
Time: 22:59:45 Log-Likelihood: 69379.
No. Observations: 40732 AIC: -1.388e+05
Df Residuals: 40730 BIC: -1.387e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0027 0.000 -6.494 0.000 -0.004 -0.002
marks 0.0071 0.000 14.526 0.000 0.006 0.008
==============================================================================
Omnibus: 5945.965 Durbin-Watson: 1.042
Prob(Omnibus): 0.000 Jarque-Bera (JB): 31241.052
Skew: 0.605 Prob(JB): 0.00
Kurtosis: 7.116 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_1d
OLS Regression Results
==============================================================================
Dep. Variable: lag_1d R-squared: 0.001
Model: OLS Adj. R-squared: 0.001
Method: Least Squares F-statistic: 34.36
Date: Sun, 02 Dec 2018 Prob (F-statistic): 4.62e-09
Time: 22:59:45 Log-Likelihood: 55219.
No. Observations: 40732 AIC: -1.104e+05
Df Residuals: 40730 BIC: -1.104e+05
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0008 0.001 -1.367 0.172 -0.002 0.000
marks 0.0041 0.001 5.862 0.000 0.003 0.005
==============================================================================
Omnibus: 4370.854 Durbin-Watson: 0.788
Prob(Omnibus): 0.000 Jarque-Bera (JB): 14077.125
Skew: 0.553 Prob(JB): 0.00
Kurtosis: 5.659 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_2d
OLS Regression Results
==============================================================================
Dep. Variable: lag_2d R-squared: 0.000
Model: OLS Adj. R-squared: -0.000
Method: Least Squares F-statistic: 0.2860
Date: Sun, 02 Dec 2018 Prob (F-statistic): 0.593
Time: 22:59:46 Log-Likelihood: 42943.
No. Observations: 40732 AIC: -8.588e+04
Df Residuals: 40730 BIC: -8.587e+04
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0015 0.001 1.912 0.056 -3.83e-05 0.003
marks -0.0005 0.001 -0.535 0.593 -0.002 0.001
==============================================================================
Omnibus: 2304.937 Durbin-Watson: 0.511
Prob(Omnibus): 0.000 Jarque-Bera (JB): 4945.843
Skew: 0.384 Prob(JB): 0.00
Kurtosis: 4.525 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS for lag_4d
OLS Regression Results
==============================================================================
Dep. Variable: lag_4d R-squared: 0.003
Model: OLS Adj. R-squared: 0.003
Method: Least Squares F-statistic: 123.2
Date: Sun, 02 Dec 2018 Prob (F-statistic): 1.39e-28
Time: 22:59:46 Log-Likelihood: 26699.
No. Observations: 40732 AIC: -5.339e+04
Df Residuals: 40730 BIC: -5.338e+04
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.0053 0.001 -4.505 0.000 -0.008 -0.003
marks 0.0155 0.001 11.099 0.000 0.013 0.018
==============================================================================
Omnibus: 367.505 Durbin-Watson: 0.348
Prob(Omnibus): 0.000 Jarque-Bera (JB): 398.400
Skew: 0.203 Prob(JB): 3.08e-87
Kurtosis: 3.265 Cond. No. 3.58
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
What if we add the past price change as an additional predictor? (A hedged sketch follows below.) 3. Article Scoring 3.1 NLP features
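One way to explore the question above (a hedged sketch, not part of the original run, using the sparse `X`/`X_test` matrices produced by `nlp.transform_train()`/`nlp.transform_test()` below together with the naive lagged returns):

```python
import numpy as np

# Append the naive one-day return as an extra feature column next to the NLP features
X_with_price = np.concatenate(
    (X.toarray(), np.array(train_naive["lag_1d"]).reshape(-1, 1)), axis=1)
X_test_with_price = np.concatenate(
    (X_test.toarray(), np.array(test_naive["lag_1d"]).reshape(-1, 1)), axis=1)
```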
###Code
nlp = nlp_validation_sets(fold = train,validation = test,train_data = articles,max_features=100,method_nlp = "stem",ngram = 1)
nlp.choose_w2v(method = "count")
nlp.fit()
X = nlp.transform_train()
X_test = nlp.transform_test()
X.shape
X_test.shape
###Output
_____no_output_____
###Markdown
3.2 train models and evaluate models
###Code
train_response.shape
X.shape
X_test.shape
test_response.shape
###Output
_____no_output_____
###Markdown
3.3 benchmark: naive (use last price change as predictor for current price change)
###Code
from sklearn.metrics import mean_squared_error
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d": 345600}
for j in colnames:
print("benchmark training error:",j,":",round(mean_squared_error(train_response[j],train_naive[j]),5))
for j in colnames:
print("benchmark testing error:",j,":",round(mean_squared_error(test_response[j],test_naive[j]),5))
###Output
benchmark testing error: lag_1m : 1e-05
benchmark testing error: lag_5m : 3e-05
benchmark testing error: lag_10m : 6e-05
benchmark testing error: lag_30m : 0.00019
benchmark testing error: lag_60m : 0.0004
benchmark testing error: lag_12h : 0.00397
benchmark testing error: lag_1d : 0.00893
benchmark testing error: lag_2d : 0.01604
benchmark testing error: lag_4d : 0.02795
###Markdown
3.4 benchmark: AR-1 linear regression (use weighted past price change as predictor for current price change)
###Code
from sklearn.linear_model import LinearRegression
for j in colnames:
reg = LinearRegression()
reg.fit(train_naive,train_response[j])
#pred = reg.predict(train_naive)
#print("linear regression training error:",j,round(mean_squared_error(pred,train_response[j])/
#mean_squared_error(train_response[j],train_naive[j]),5))
pred = reg.predict(test_naive)
#print("")
print("AR-1 linear regression test error:",j,round(mean_squared_error(pred,test_response[j])/
mean_squared_error(test_response[j],test_naive[j]),5))
###Output
AR-1 linear regression test error: lag_1m 0.52073
AR-1 linear regression test error: lag_5m 0.52643
AR-1 linear regression test error: lag_10m 0.49202
AR-1 linear regression test error: lag_30m 0.49134
AR-1 linear regression test error: lag_60m 0.46501
AR-1 linear regression test error: lag_12h 0.47748
AR-1 linear regression test error: lag_1d 0.42965
AR-1 linear regression test error: lag_2d 0.4483
AR-1 linear regression test error: lag_4d 0.56196
###Markdown
3.5 linear regression with NLP features 3.5.1 NLP features only
###Code
X = np.concatenate((X.toarray(),train[["predicted_sentiment"]].values),axis = 1)
X.shape
X_test = np.concatenate((X_test.toarray(),test[["predicted_sentiment"]].values),axis = 1)
X_test.shape
test_response.shape
# linear regression: train
#a = np.concatenate((X,train_dummies),axis = 1)
#b = np.concatenate((X_test,test_dummies),axis = 1)
for j in colnames:
reg = LinearRegression()
reg.fit(X,train_response[j])
#pred = reg.predict(X)
#print("linear regression training error:",j,round(mean_squared_error(pred,train_response[j])/
# mean_squared_error(train_response[j],train_naive[j]),5))
pred = reg.predict(X_test)
#print("")
print("linear regression test error:",j,round(mean_squared_error(pred,test_response[j])/
mean_squared_error(test_response[j],test_naive[j]),5))
###Output
linear regression test error: lag_1m 0.52084
linear regression test error: lag_5m 0.52696
linear regression test error: lag_10m 0.49382
linear regression test error: lag_30m 0.49053
linear regression test error: lag_60m 0.46523
linear regression test error: lag_12h 0.48613
linear regression test error: lag_1d 0.43401
linear regression test error: lag_2d 0.44412
linear regression test error: lag_4d 0.55177
###Markdown
3.6 lasso regression with NLP features (with 5-fold cross validation)
###Code
#from: https://www.kaggle.com/floser/aw6-the-lasso-cross-validated
#https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html
from sklearn.preprocessing import scale
from sklearn.linear_model import Lasso, LassoCV, Ridge, RidgeCV
#from sklearn import cross_validation
for j in colnames:
alphas = [1,5,10,50,100,1000]
lassocv = LassoCV(alphas=alphas, cv=5, max_iter=100000, normalize=True)
lassocv.fit(X, train_response[j])
lasso = Lasso()
lasso.set_params(alpha=lassocv.alpha_)
#print("Alpha=", lassocv.alpha_)
lasso.fit(X, train_response[j])
#print("mse = ",mean_squared_error(y_test, lasso.predict(X_test)))
#print("best model coefficients:")
#pd.Series(lasso.coef_, index=X.columns)
#pred = lasso.predict(X)
#print("lasso regression training error:",j,round(mean_squared_error(pred,train_response[j])/
#mean_squared_error(train_response[j],train_naive[j]),5))
pred = lasso.predict(X_test)
print("lasso R squared:",j,lasso.score(X,train_response[j]))
print("lasso regression test error:",j,round(mean_squared_error(pred,test_response[j])/
mean_squared_error(test_response[j],test_naive[j]),5))
###Output
lasso R squared: lag_1m 0.0
lasso regression test error: lag_1m 0.52028
lasso R squared: lag_5m 0.0
lasso regression test error: lag_5m 0.52693
lasso R squared: lag_10m 0.0
lasso regression test error: lag_10m 0.49379
lasso R squared: lag_30m 0.0
lasso regression test error: lag_30m 0.49205
lasso R squared: lag_60m 0.0
lasso regression test error: lag_60m 0.46784
lasso R squared: lag_12h 0.0
lasso regression test error: lag_12h 0.49192
lasso R squared: lag_1d 0.0
lasso regression test error: lag_1d 0.44055
lasso R squared: lag_2d 0.0
lasso regression test error: lag_2d 0.45166
lasso R squared: lag_4d 0.0
lasso regression test error: lag_4d 0.56981
###Markdown
3.7 random forest regressor with NLP features
###Code
#from: https://towardsdatascience.com/hyperparameter-tuning-the-random-forest-in-python-using-scikit-learn-28d2aa77dd74
#https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
#from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
start = time.time()
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d": 345600}
model = ["lag_1d"]
count = 0
for j in model:
count+=1
#a = np.concatenate((train_dummies,np.array(train_naive[j]).reshape(len(train_naive[j]),1),X),axis = 1)
#b = np.concatenate((test_dummies,np.array(test_naive[j]).reshape(len(test_naive[j]),1),X_test),axis = 1)
rf = RandomForestRegressor()
n_estimators = [500]
max_features = [.3]
max_depth = [None]
param_grid = {'n_estimators': n_estimators,'max_features': max_features,"max_depth":max_depth}
grid_search = GridSearchCV(estimator = rf, param_grid = param_grid,
cv = 2, n_jobs = -1, verbose = 2,scoring = "neg_mean_squared_error")
#rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 4,
#cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Fit the random search model
grid_search.fit(X, train_response[j])
best_param = grid_search.best_params_
rf_d = RandomForestRegressor(n_estimators = best_param["n_estimators"],max_features = best_param["max_features"]
,max_depth = best_param["max_depth"])
rf_d.fit(X,train_response[j])
pred = rf_d.predict(X_test)
pred_rf = rf_d.predict(X_test)
print("random forest test error:",j,round(mean_squared_error(pred,test_response[j])/
mean_squared_error(test_response[j],test_naive[j]),5))
#if count>0:
# break
print("time elapsed:",round((time.time()-start)/60,1),"minutes")
best_param
import matplotlib.pyplot as plt
plt.plot(pred_rf,test_response["lag_1d"],"o")
plt.plot(test_naive["lag_1d"],test_response["lag_1d"],"o")
import random
rand = []
for x in range(10):
    rand.append(random.randint(0, len(pred_rf) - 1))
for i in rand:
print("true value:",round(test_response["lag_4d"].iloc[i],2),"","predicted value:",round(pred_rf[i],2))
#print(i)
for i in rand:
print(test["title"].iloc[i])
#print(i)
###Output
Use blockchain technology to prevent PNB like scam: Foreign data expert
Unpacking Facebookโs Bitcoin ban
Bitcoinโs Nosedive Hasnโt Hurt Red-Hot Coin Offerings
Nine suspects arrested over theft of bitcoin machines
Global Cryptocurrency Markets Improve in February as Bitcoin Rises Again
Rippleโs XRP crypto token is more volatile than just about everything
Bitcoin Has Triggered the Energy Arms Race
Bitcoin, Ethereum, Bitcoin Cash, Ripple, Stellar, Litecoin, Cardano, NEO, EOS: Price Analysis, March 10
Supposed Bitcoin co-inventor sued for more than $10 billion in cryptocurrency
Wall Street embraces bitcoin
###Markdown
3.8 GBM with cross-validation
###Code
#https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
start = time.time()
colnames = {"lag_1m":60,"lag_5m":300,"lag_10m":600,"lag_30m":1800,"lag_60m":3600,"lag_12h":43200,"lag_1d":86400,
"lag_2d":172800,"lag_4d": 345600}
model = ["lag_1d"]
count = 0
for j in model:
count+=1
#a = np.concatenate((train_dummies,np.array(train_naive[j]).reshape(len(train_naive[j]),1),X),axis = 1)
#b = np.concatenate((test_dummies,np.array(test_naive[j]).reshape(len(test_naive[j]),1),X_test),axis = 1)
gbm = GradientBoostingRegressor(n_estimators = 500,validation_fraction = .75,
n_iter_no_change = 2)
max_features = [0.6,0.8]
subsample = [.7]
max_depth = [1,5]
learning_rate = [.01]
n_estimators=[200]
param_grid = {'max_features': max_features,"max_depth":max_depth,
"subsample":subsample,"learning_rate":learning_rate}
#grid_search = GridSearchCV(estimator = gbm, param_grid = param_grid,
# cv = 2, n_jobs = -1, verbose = 2,scoring = "neg_mean_squared_error")
#rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 4,
#cv = 3, verbose=2, random_state=42, n_jobs = -1)
# Fit the random search model
#grid_search.fit(X, train_response[j])
#best_param = grid_search.best_params_
gbm_d = GradientBoostingRegressor(
)
gbm_d.fit(X,train_response[j])
pred = gbm_d.predict(X_test)
pred_gbm = gbm_d.predict(X_test)
print("gbm test error:",j,round(mean_squared_error(pred,test_response[j])/
mean_squared_error(test_response[j],test_naive[j]),5))
#if count>0:
# break
print("time elapsed:",round((time.time()-start)/60,1),"minutes")
best_param
gbm_d.n_estimators
#grid_search.cv_results_
import matplotlib.pyplot as plt
plt.plot(pred,test_response["lag_1d"],"o")
import random
rand = []
for x in range(20):
    rand.append(random.randint(0, len(pred) - 1))
for i in rand:
print("true value:",round(test_response["lag_4d"].iloc[i],2),"","predicted value:",round(pred[i],2))
#print(i)
###Output
true value: -0.11 predicted value: 0.01
true value: 0.03 predicted value: -0.01
true value: 0.12 predicted value: 0.01
true value: -0.18 predicted value: -0.0
true value: -0.17 predicted value: 0.0
true value: -0.13 predicted value: -0.02
true value: -0.14 predicted value: 0.0
true value: -0.0 predicted value: 0.01
true value: 0.07 predicted value: -0.0
true value: 0.01 predicted value: 0.0
true value: 0.04 predicted value: 0.02
true value: -0.07 predicted value: -0.0
true value: 0.0 predicted value: 0.01
true value: -0.14 predicted value: 0.01
true value: 0.04 predicted value: 0.0
true value: -0.14 predicted value: -0.0
true value: -0.06 predicted value: 0.0
true value: 0.03 predicted value: 0.0
true value: 0.06 predicted value: -0.0
true value: -0.13 predicted value: -0.01
###Markdown
3.9 Final Scoring
###Code
pred_rf
np.percentile(pred_rf,95)
pred_rf[0]
def create_score(prediction):
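    # Score each article on a 0-10 scale from the magnitude of its predicted
    # price change: the 2nd and 98th percentiles of |prediction| define the
    # scale, values in between are rescaled linearly, and anything outside
    # that band is assigned the maximum score of 10.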
min_ = np.percentile(abs(prediction),2)
max_ = np.percentile(abs(prediction),98)
new_list = []
for i in abs(prediction):
if i < min_ or i> max_:
new_list.append(10)
else:
new_list.append(round((i-min_)/(max_-min_)*10))
return new_list
scoring = create_score(pred_rf)
min(scoring)
max(scoring)
###Output
_____no_output_____
###Markdown
4. Evaluation
###Code
import matplotlib.pyplot as plt
plt.plot(pred_rf,test_response["lag_1d"],"o")
plt.xlabel('Predicted Daily Price Change')
plt.ylabel('Actual Daily Price Change')
import random
rand = []
for x in range(10):
    rand.append(random.randint(0, len(pred_rf) - 1))
for i in rand:
print("true value:",round(test_response["lag_4d"].iloc[i],2),"","predicted value:",round(pred_rf[i],2),"",
"predicted scoring:",scoring[i])
print("")
print(test["title"].iloc[i])
print("")
print("")
###Output
true value: 0.11 predicted value: 0.0 predicted scoring: 0.0
Craig Wright, who once claimed to be Bitcoin founder Satoshi Nakamoto, sued for $10B by his deceased partner Dave Kleiman's estate for stealing $5B in bitcoin (Russell Brandom/The Verge)
true value: -0.21 predicted value: -0.02 predicted scoring: 2.0
Arun Jaitley has just killed Indiaโs cryptocurrency party
true value: -0.04 predicted value: -0.02 predicted scoring: 2.0
Why did Ethereum Drop so hard? Bitcoin is Correcting, and Cryptocurrency Markets Follow.
true value: 0.3 predicted value: 0.16 predicted scoring: 10
Bitcoin slides below $6000; half its value lost in 2018 - Fox Business
true value: -0.04 predicted value: 0.05 predicted scoring: 4.0
Dutch Court Finds Bitcoin A Legitimate โTransferable Valueโ
true value: 0.14 predicted value: 0.12 predicted scoring: 10
Bitcoin plunges below $10,000 as major crypto exchange to share user details with US tax authorities
true value: 0.19 predicted value: 0.11 predicted scoring: 10
Expert Believes Bitcoin Price Crash Will Continue
true value: 0.05 predicted value: -0.03 predicted scoring: 3.0
Experts Discuss: What Can Blockchain Really Do for Advertising?
true value: 0.02 predicted value: 0.03 predicted scoring: 3.0
There's now a vibrator that will order you a pizza when you, um, finish
true value: 0.04 predicted value: 0.04 predicted scoring: 4.0
Friend or Foe: Inside Polandโs Strange War on Cryptocurrencies
|
04 - NLP - Applied Text Mining/Assignment 1.ipynb | ###Markdown
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._--- Assignment 1In this assignment, you'll be working with messy medical data and using regex to extract relevant information from the data. Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates. Here is a list of some of the variants you might encounter in this dataset:* 04/20/2009; 04/20/09; 4/20/09; 4/3/09* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009* Feb 2009; Sep 2009; Oct 2010* 6/2008; 12/2009* 2009; 2010Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order according to the following rules:* Assume all dates in xx/xx/xx format are mm/dd/yy* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).* Watch out for potential typos as this is a raw, real-life derived dataset.With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.For example if the original series was this: 0 1999 1 2010 2 1978 3 2015 4 1985Your function should return this: 0 2 1 4 2 0 3 1 4 3Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.*This function should return a Series of length 500 and dtype int.*
###Code
def date_sorter():
import pandas as pd
import re
from calendar import month_name
import dateutil.parser
from datetime import datetime
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
df1 = pd.Series(doc)
df = pd.DataFrame(df1, columns=['text'])
print(df)
    pattern = r"[,.]? \d{4}|".join(month_name[1:]) + r"[,.]? \d{4}"
date_list=[]
i=0
for word in df['text']:
word=word.strip('\n')
dte = re.findall(r'\d{1,2}\/\d{1,2}\/\d{2,4}|\d{1,2}\-\d{1,2}\-\d{2,4}|[A-Z][a-z]+\-\d{1,2}\-\d{4}|[A-Z][a-z]+[,.]? \d{2}[a-z]*,? \d{4}|\d{1,2} [A-Z][a-z,.]+ \d{4}|[A-Z][a-z]{2}[,.]? \d{4}|'+pattern+r'|\d{1,2}\/\d{4}|\d{4}',word)
date_list.append(dte)
i=i+1
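    # Manual corrections for a few notes where the regex captured more than one
    # candidate match; keep only the intended date string at these indices.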
date_list[271]=[date_list[271][1]]
temp=""
date_list[461]=[temp.join(date_list[461]).split(',')[1]]
date_list[465]=[temp.join(date_list[465]).split(',')[2]]
date_final=[]
for text in date_list:
date_final.append(text[0])
i=0
num=[]
while (i < len(date_final)):
num.append(i)
temp=""
temp=temp.join(date_final[i])
if(re.match(r'\d{4}',temp)) :
temp = 'January 1, '+temp
elif (re.match(r'\d{1,2}\/\d{4}',temp)) :
date_split = temp.split('/')
temp = date_split[0] + '/1/'+date_split[1]
elif(re.match(r'[A-Z][a-z]+[,.]? \d{4}',temp)) :
date_split = temp.split(' ')
temp = date_split[0] + ' 1 '+date_split[1]
date_final[i] = dateutil.parser.parse(temp).strftime("%m/%d/%Y")
date_final[i] = datetime.strptime(date_final[i], "%m/%d/%Y").date()
i = i+1
df['Date']=date_final
print(df)
df['rank']=num
sorted_date= df.sort_values(by="Date")
df1=sorted_date.index
df_series=pd.Series(df1)
return df_series
date_sorter()
###Output
text
0 03/25/93 Total time of visit (in minutes):\n
1 6/18/85 Primary Care Doctor:\n
2 sshe plans to move as of 7/8/71 In-Home Servic...
3 7 on 9/27/75 Audit C Score Current:\n
4 2/6/96 sleep studyPain Treatment Pain Level (N...
5 .Per 7/06/79 Movement D/O note:\n
6 4, 5/18/78 Patient's thoughts about current su...
7 10/24/89 CPT Code: 90801 - Psychiatric Diagnos...
8 3/7/86 SOS-10 Total Score:\n
9 (4/10/71)Score-1Audit C Score Current:\n
10 (5/11/85) Crt-1.96, BUN-26; AST/ALT-16/22; WBC...
11 4/09/75 SOS-10 Total Score:\n
12 8/01/98 Communication with referring physician...
13 1/26/72 Communication with referring physician...
14 5/24/1990 CPT Code: 90792: With medical servic...
15 1/25/2011 CPT Code: 90792: With medical servic...
16 4/12/82 Total time of visit (in minutes):\n
17 1; 10/13/1976 Audit C Score, Highest/Date:\n
18 4, 4/24/98 Relevant Drug History:\n
19 ) 59 yo unemployed w referred by Urgent Care f...
20 7/21/98 Total time of visit (in minutes):\n
21 10/21/79 SOS-10 Total Score:\n
22 3/03/90 CPT Code: 90792: With medical services\n
23 2/11/76 CPT Code: 90792: With medical services\n
24 07/25/1984 CPT Code: 90791: No medical services\n
25 4-13-82 Other Child Mental Health Outcomes Sca...
26 9/22/89 CPT Code: 90792: With medical services\n
27 9/02/76 CPT Code: 90791: No medical services\n
28 9/12/71 [report_end]\n
29 10/24/86 Communication with referring physicia...
.. ...
470 y1983 Clinic Hospital, first hospitalization, ...
471 tProblems Urinary incontinence : mild urge inc...
472 .2010 - wife; nightmares and angry outbursts; ...
473 shx of TBI (1975) ISO MVA.Medical History:\n
474 sPatient reported losing three friends that pa...
475 TSH okay in 2015 Prior EKG:\n
476 1989 Family Psych History: Family History of S...
477 oEnjoys animals, had a dog x 14 yrs who died i...
478 eHistory of small right parietal subgaleal hem...
479 sIn KEP Psychiatryfor therapy and medications ...
480 1. Esophageal cancer, dx: 2013, on FOLFOX with...
481 y1974 (all)\n
482 h/o restraining order by sister/mother in 1990...
483 sTexas Medical Center; Oklahoma for 2 weeks; 1...
484 Death of former partner in 2004 by overdose as...
485 Was "average" student. "I didn't have too man...
486 Contemplating jumping off building - 1973 - di...
487 appendectomy s/p delivery 1992 Prior relevant ...
488 tProblems renal cell cancer : s/p nephrectomy ...
489 ran own business for 35 years, sold in 1985\n
490 Lab: B12 969 2007\n
491 )and 8mo in 2009\n
492 .Moved to USA in 1986. Suffered from malnutrit...
493 r1978\n
494 . Went to Emerson, in Newfane Alaska. Started ...
495 1979 Family Psych History: Family History of S...
496 therapist and friend died in ~2006 Parental/Ca...
497 2008 partial thyroidectomy\n
498 sPt describes a history of sexual abuse as a c...
499 . In 1980, patient was living in Naples and de...
[500 rows x 1 columns]
text Date
0 03/25/93 Total time of visit (in minutes):\n 1993-03-25
1 6/18/85 Primary Care Doctor:\n 1985-06-18
2 sshe plans to move as of 7/8/71 In-Home Servic... 1971-07-08
3 7 on 9/27/75 Audit C Score Current:\n 1975-09-27
4 2/6/96 sleep studyPain Treatment Pain Level (N... 1996-02-06
5 .Per 7/06/79 Movement D/O note:\n 1979-07-06
6 4, 5/18/78 Patient's thoughts about current su... 1978-05-18
7 10/24/89 CPT Code: 90801 - Psychiatric Diagnos... 1989-10-24
8 3/7/86 SOS-10 Total Score:\n 1986-03-07
9 (4/10/71)Score-1Audit C Score Current:\n 1971-04-10
10 (5/11/85) Crt-1.96, BUN-26; AST/ALT-16/22; WBC... 1985-05-11
11 4/09/75 SOS-10 Total Score:\n 1975-04-09
12 8/01/98 Communication with referring physician... 1998-08-01
13 1/26/72 Communication with referring physician... 1972-01-26
14 5/24/1990 CPT Code: 90792: With medical servic... 1990-05-24
15 1/25/2011 CPT Code: 90792: With medical servic... 2011-01-25
16 4/12/82 Total time of visit (in minutes):\n 1982-04-12
17 1; 10/13/1976 Audit C Score, Highest/Date:\n 1976-10-13
18 4, 4/24/98 Relevant Drug History:\n 1998-04-24
19 ) 59 yo unemployed w referred by Urgent Care f... 1977-05-21
20 7/21/98 Total time of visit (in minutes):\n 1998-07-21
21 10/21/79 SOS-10 Total Score:\n 1979-10-21
22 3/03/90 CPT Code: 90792: With medical services\n 1990-03-03
23 2/11/76 CPT Code: 90792: With medical services\n 1976-02-11
24 07/25/1984 CPT Code: 90791: No medical services\n 1984-07-25
25 4-13-82 Other Child Mental Health Outcomes Sca... 1982-04-13
26 9/22/89 CPT Code: 90792: With medical services\n 1989-09-22
27 9/02/76 CPT Code: 90791: No medical services\n 1976-09-02
28 9/12/71 [report_end]\n 1971-09-12
29 10/24/86 Communication with referring physicia... 1986-10-24
.. ... ...
470 y1983 Clinic Hospital, first hospitalization, ... 1983-01-01
471 tProblems Urinary incontinence : mild urge inc... 1999-01-01
472 .2010 - wife; nightmares and angry outbursts; ... 2010-01-01
473 shx of TBI (1975) ISO MVA.Medical History:\n 1975-01-01
474 sPatient reported losing three friends that pa... 1972-01-01
475 TSH okay in 2015 Prior EKG:\n 2015-01-01
476 1989 Family Psych History: Family History of S... 1989-01-01
477 oEnjoys animals, had a dog x 14 yrs who died i... 1994-01-01
478 eHistory of small right parietal subgaleal hem... 1993-01-01
479 sIn KEP Psychiatryfor therapy and medications ... 1996-01-01
480 1. Esophageal cancer, dx: 2013, on FOLFOX with... 2013-01-01
481 y1974 (all)\n 1974-01-01
482 h/o restraining order by sister/mother in 1990... 1990-01-01
483 sTexas Medical Center; Oklahoma for 2 weeks; 1... 1995-01-01
484 Death of former partner in 2004 by overdose as... 2004-01-01
485 Was "average" student. "I didn't have too man... 1987-01-01
486 Contemplating jumping off building - 1973 - di... 1973-01-01
487 appendectomy s/p delivery 1992 Prior relevant ... 1992-01-01
488 tProblems renal cell cancer : s/p nephrectomy ... 1977-01-01
489 ran own business for 35 years, sold in 1985\n 1985-01-01
490 Lab: B12 969 2007\n 2007-01-01
491 )and 8mo in 2009\n 2009-01-01
492 .Moved to USA in 1986. Suffered from malnutrit... 1986-01-01
493 r1978\n 1978-01-01
494 . Went to Emerson, in Newfane Alaska. Started ... 2002-01-01
495 1979 Family Psych History: Family History of S... 1979-01-01
496 therapist and friend died in ~2006 Parental/Ca... 2006-01-01
497 2008 partial thyroidectomy\n 2008-01-01
498 sPt describes a history of sexual abuse as a c... 2005-01-01
499 . In 1980, patient was living in Naples and de... 1980-01-01
[500 rows x 2 columns]
|
Style Transfer/Picture Change.ipynb | ###Markdown
Model Load
###Code
from keras import backend as K
target_image = K.constant(preprocess_image(target_image_path))
style_reference_image = K.constant(preprocess_image(style_reference_image_path))
# This placeholder will contain our generated image
combination_image = K.placeholder((1, img_height, img_width, 3))
# We combine the 3 images into a single batch
input_tensor = K.concatenate([target_image,
style_reference_image,
combination_image], axis=0)
# We build the VGG19 network with our batch of 3 images as input.
# The model will be loaded with pre-trained ImageNet weights.
model = vgg19.VGG19(input_tensor=input_tensor,
weights='imagenet',
include_top=False)
print('Model loaded.')
def content_loss(base, combination):
return K.sum(K.square(combination - base))
###Output
_____no_output_____
###Markdown
Set loss function
###Code
def gram_matrix(x):
features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))
gram = K.dot(features, K.transpose(features))
return gram
def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_height * img_width
return K.sum(K.square(S - C)) / (4. * (channels ** 2) * (size ** 2))
def total_variation_loss(x):
a = K.square(
x[:, :img_height - 1, :img_width - 1, :] - x[:, 1:, :img_width - 1, :])
b = K.square(
x[:, :img_height - 1, :img_width - 1, :] - x[:, :img_height - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
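# Note: total_variation_loss above penalizes differences between neighbouring pixels,
# acting as a smoothness regularizer that keeps the generated image from looking noisy.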
# Dict mapping layer names to activation tensors
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# Name of layer used for content loss
content_layer = 'block5_conv2'
# Name of layers used for style loss
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
# Weights in the weighted average of the loss components
total_variation_weight = 1e-4
style_weight = 1.
content_weight = 0.025
# Define the loss by adding all components to a `loss` variable
loss = K.variable(0.)
layer_features = outputs_dict[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features,
combination_features)
for layer_name in style_layers:
layer_features = outputs_dict[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layers)) * sl
loss += total_variation_weight * total_variation_loss(combination_image)
# Get the gradients of the generated image wrt the loss
grads = K.gradients(loss, combination_image)[0]
# Function to fetch the values of the current loss and the current gradients
fetch_loss_and_grads = K.function([combination_image], [loss, grads])
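# Evaluator caches the loss and gradient values produced by a single call to
# fetch_loss_and_grads, so that fmin_l_bfgs_b can request them through two separate
# callbacks (loss and grads) without running the network twice per iteration.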
class Evaluator(object):
def __init__(self):
self.loss_value = None
self.grads_values = None
def loss(self, x):
assert self.loss_value is None
x = x.reshape((1, img_height, img_width, 3))
outs = fetch_loss_and_grads([x])
loss_value = outs[0]
grad_values = outs[1].flatten().astype('float64')
self.loss_value = loss_value
self.grad_values = grad_values
return self.loss_value
def grads(self, x):
assert self.loss_value is not None
grad_values = np.copy(self.grad_values)
self.loss_value = None
self.grad_values = None
return grad_values
evaluator = Evaluator()
###Output
_____no_output_____
###Markdown
Learning
###Code
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
import time
result_prefix = 'style_transfer_result'
iterations = 10
# Run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss.
# This is our initial state: the target image.
# Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat vectors.
x = preprocess_image(target_image_path)
x = x.flatten()
for i in range(iterations):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x,
fprime=evaluator.grads, maxfun=20)
print('Current loss value:', min_val)
# Save current generated image
img = x.copy().reshape((img_height, img_width, 3))
img = deprocess_image(img)
fname = result_prefix + '_at_iteration_%d.png' % i
imsave(fname, img)
end_time = time.time()
print('Image saved as', fname)
print('Iteration %d completed in %ds' % (i, end_time - start_time))
###Output
Start of iteration 0
###Markdown
View picture
###Code
from matplotlib import pyplot as plt
# Content image
plt.imshow(load_img(target_image_path, target_size=(img_height, img_width)))
plt.figure()
# Style image
plt.imshow(load_img(style_reference_image_path, target_size=(img_height, img_width)))
plt.figure()
# Generate image
plt.imshow(img)
plt.show()
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imsave
import time
result_prefix = 'style_transfer_result'
iterations = 20
# Run scipy-based optimization (L-BFGS) over the pixels of the generated image
# so as to minimize the neural style loss.
# This is our initial state: the target image.
# Note that `scipy.optimize.fmin_l_bfgs_b` can only process flat vectors.
x = preprocess_image('www.jpg')
x = x.flatten()
for i in range(5):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x,
fprime=evaluator.grads, maxfun=20)
print('Current loss value:', min_val)
# Save current generated image
img = x.copy().reshape((img_height, img_width, 3))
img = deprocess_image(img)
fname = result_prefix + '_at_iteration_%d.png' % i
imsave(fname, img)
end_time = time.time()
print('Image saved as', fname)
print('Iteration %d completed in %ds' % (i, end_time - start_time))
###Output
Start of iteration 0
Current loss value: 1.20559e+09
Image saved as style_transfer_result_at_iteration_0.png
Iteration 0 completed in 191s
Start of iteration 1
Current loss value: 5.43658e+08
Image saved as style_transfer_result_at_iteration_1.png
Iteration 1 completed in 275s
Start of iteration 2
|
kernels/sequential_testing.ipynb | ###Markdown
 Sequential A/B Testing A/B testing (also known as split testing) is the process of comparing two versions of an asset and measuring the difference in performance. It involves testing two versions of a single variable at a time, following the principle that no more than one factor should be varied at once. **Case Overview**: SmartAd is a mobile-first advertising agency. The company provides an additional service called Brand Impact Optimiser (BIO), a lightweight questionnaire served with every campaign to determine the impact of the ads it designs. The task at hand is to design a reliable hypothesis testing algorithm for the BIO service and determine whether the recent advertising campaign resulted in a significant lift in brand awareness. **Data**: The BIO data for this project is a "Yes" or "No" response of online users to the following question: `Q: Do you know the brand SmartAd?` Yes / No. The data has the following columns: **auction_id**, **experiment**, **date**, **hour**, **device_make**, **platform_os**, **browser**, **yes**, **no**. Table of Contents 1. [Libraries](Libraries) 2. [Dataset](Dataset) 3. [Sample conditional SPRT](Sample-conditional-SPRT) 3.1 [conditional SPRT function](conditional-SPRT-function) 3.2 [Boundaries and Plots](Boundaries-and-Plots) 3.3 [Data Transformation](Data-Transformation) 3.4 [Data Summary plot and print functions](Data-Summary-plot-and-print-functions) 3.5 [Testing](Testing) 1. Libraries
###Code
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import pandas as pd
import numpy as np
import math
from scipy.stats import binom
from math import *
import seaborn as sns
import matplotlib.pyplot as plt
import json
import warnings
def ignore_warn(*args, **kwargs):
pass
warnings.warn = ignore_warn
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
2. Dataset
###Code
# function to fetch data
def fetch_data(id, file_name):
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
downloaded = drive.CreateFile({'id':id})
downloaded.GetContentFile(file_name)
data=pd.read_csv(file_name)
return data
# fetch the data
data = fetch_data('1YSn01vvlHKQaAIBtwIXRNd-oTaTuDN09', 'ABAdRecall.csv')
data.head()
###Output
_____no_output_____
###Markdown
3. Sample conditional SPRT 3.1 conditional SPRT function
###Code
def ConditionalSPRT(x,y,t1,alpha=0.05,beta=0.10,stop=None):
    if t1<=1:
        print("warning: Odds ratio should exceed 1.")
    if (alpha >0.5) | (beta >0.5):
        print("warning: Unrealistic values of alpha or beta were passed."
              +" You should have good reason to use large alpha & beta values")
    if stop!=None:
        stop=math.floor(stop)
def comb(n, k):
return factorial(n) // factorial(k) // factorial(n - k)
def lchoose(b, j):
a=[]
if (type(j) is list) | (isinstance(j,np.ndarray)==True):
if len(j)<2:
j=j[0]
if (type(j) is list) | (isinstance(j,np.ndarray)==True):
for k in j:
n=b
if (0 <= k) & (k<= n):
a.append(math.log(comb(n,k)))
else:
a.append(0)
else:
n=b
k=j
if (0 <= k) & (k<= n):
a.append(math.log(comb(n,k)))
else:
a.append(0)
return np.array(a)
def g(x,r,n,t1,t0=1):
return -math.log(h(x,r,n,t1))+math.log(h(x,r,n,t0))
def h(x,r,n,t=1):
return f(r,n,t,offset=ftermlog(x,r,n,t))
def f(r,n,t,offset=0):
upper=max(0,r-n)
lower=min(n,r)
rng=list(range(upper,lower+1))
return np.sum(fterm(rng,r,n,t,offset))
def fterm(j,r,n,t,offset=0):
ftlog=ftermlog(j,r,n,t,offset)
return np.array([math.exp(ex) for ex in ftlog])
def ftermlog(j,r,n,t,offset=0):
xx=r-j
lch=lchoose(n,j)
lchdiff=lchoose(n,xx)
lg=np.array(j)*math.log(t)
lgsum=lch+lchdiff
lgsum2=lgsum+lg
lgdiff=lgsum2-offset
return lgdiff
def logf(r,n,t,offset=0):
z=f(r,n,t,offset)
if z>0:
return math.log(z)
else:
return np.nan
def clowerUpper(r,n,t1c,t0=1,alpha=0.05,beta=0.10):
offset=ftermlog(math.ceil(r/2),r,n,t1c)
z=logf(r,n,t1c,logf(r,n,t0,offset)+offset)
a=-math.log(alpha/(1-beta))
b=math.log(beta/(1-alpha))
lower=b
upper=1+a
return (np.array([lower,upper])+z)/math.log(t1c/t0)
l=math.log(beta/(1-alpha))
u=-math.log(alpha/(1-beta))
sample_size=min(len(x),len(y))
n=np.array(range(1,sample_size+1))
if stop!=None:
n=np.array([z for z in n if z<=stop])
x1=np.cumsum(x[n-1])
r=x1+np.cumsum(y[n-1])
stats=np.array(list(map(g,x1, r, n, [t1]*len(x1)))) #recurcively calls g
clu=list(map(clowerUpper,r,n,[t1]*len(r),[1]*len(r),[alpha]*len(r), [beta]*len(r)))
limits=[]
for v in clu:
inArray=[]
for vin in v:
inArray.append(math.floor(vin))
limits.append(np.array(inArray))
limits=np.array(limits)
k=np.where((stats>=u) | (stats<=l))
cvalues=stats[k]
if cvalues.shape[0]<1:
k= np.nan
        outcome='Unable to conclude. Needs more samples.'
else:
k=np.min(k)
if stats[k]>=u:
outcome=f'Exposed group produced a statistically significant increase.'
else:
outcome='There is no statistically significant difference between two test groups'
    if (stop!=None) and np.isnan(k):
        c1=clowerUpper(r,stop,t1,alpha,beta)
        c1=math.floor(np.mean(c1)-0.5)
        if x1[stop-1]<=c1:
            truncate_decision='h0'
            outcome='Maximum limit decision: the approximate decision point shows there is no statistically significant difference between the two test groups'
        else:
            truncate_decision='h1'
            outcome='Maximum limit decision: the approximate decision point shows the exposed group produced a statistically significant increase.'
truncated=stop
else:
truncate_decision='Non'
truncated=np.nan
return (outcome,n, k,l,u,truncated,truncate_decision,x1,r,stats,limits)
###Output
_____no_output_____
###Markdown
3.2 Boundaries and Plots
###Code
class SequentialTest:
  def __init__(self, t1=2, alpha=0.05, beta=0.1, stop=None):
    '''
    initialise startup variables
    '''
    if t1 <= 1:
      print("warning: Odds ratio should exceed 1.")
    if (alpha > 0.5) | (beta > 0.5):
      print("warning: Unrealistic values of alpha or beta were passed."
            + " You should have good reason to use large alpha & beta values")
    self.t1 = t1
    self.alpha = alpha
    self.beta = beta
    self.stop = math.floor(stop) if stop is not None else None
def computeBoundaries(self,alpha, beta):
'''
    This function should compute boundaries
'''
a=math.log(beta/(1-alpha))
b=math.log((1 - beta)/alpha)
return a, b
def plotTest(self):
'''
showing the cumulative statistical test (e.g., log probability ratio) and the upper and lower limits.
'''
def plotBoundaries(self, exposed):
'''cumulative sums of exposed successes, bounded by the critical limits.
'''
    e_df = pd.DataFrame(exposed)
    a = e_df.cumsum()
    a.columns = ['value']
    sns.lineplot(x=a.index, y=a.value)
###Output
_____no_output_____
###Markdown
3.3 Data Transformation
###Code
def transform_data(df):
'''
segment data into exposed and control groups
consider that SmartAd runs the experment hourly, group data into hours.
Hint: create new column to hold date+hour and use df.column.map(lambda x: pd.Timestamp(x,tz=None).strftime('%Y-%m-%d:%H'))
create two dataframes with bernouli series 1 for posetive(yes) and 0 for negative(no)
Hint: Given engagement(sum of yes and no until current observation as an array) and success (yes count as an array), the method generates random binomial distribution
#Example
engagement = np.array([5, 3, 3])
yes = np.array([2, 0, 3])
Output is "[1] 1 0 1 0 0 0 0 0 1 1 1", showing a binary array of 5+3+3 values
of which 2 of the first 5 are ones, 0 of the next 3 are ones, and all 3 of
the last 3 are ones where the position the ones is randomly distributed within each group.
'''
# split dataset to control and exposed groups
exposed = df.loc[df.experiment == 'exposed'] #exposed set
control = df.loc[df.experiment == 'control'] #control set
#datehour
exposed['dateHour'] = pd.to_datetime(exposed.date)
exposed.dateHour += pd.to_timedelta(exposed.hour, unit='h')
exposed.dateHour = exposed.dateHour.map(lambda x: pd.Timestamp(x,tz=None).strftime('%Y-%m-%d:%H'))
control['dateHour'] = pd.to_datetime(control.date)
control.dateHour += pd.to_timedelta(control.hour, unit='h')
control.dateHour = control.dateHour.map(lambda x: pd.Timestamp(x,tz=None).strftime('%Y-%m-%d:%H'))
# groupby datehour
df_exposed = exposed.groupby('dateHour').agg({'auction_id':'count', 'device_make':'count', 'platform_os':'count', 'browser':'count', 'yes':'sum', 'no':'sum'})
df_control = control.groupby('dateHour').agg({'auction_id':'count', 'device_make':'count', 'platform_os':'count', 'browser':'count', 'yes':'sum', 'no':'sum'})
# engagement
df_exposed['engagement'] = df_exposed['yes'] + df_exposed['no']
df_control['engagement'] = df_control['yes'] + df_control['no']
# success
df_exposed['success'] = df_exposed['yes']
df_control['success'] = df_control['yes']
# p of success
global p_e, p_c
p_e = sum(df_exposed['success']) / sum(df_exposed['engagement'])
p_c = sum(df_control['success']) / sum(df_control['engagement'])
# engagement and success to arrays then p
engagement_e = df_exposed['engagement'].to_numpy()
engagement_c = df_control['engagement'].to_numpy()
# data generation
  e = np.random.choice([0, 1], size=((np.sum(engagement_e)),), p=[1-p_e, p_e])  # 1 = "yes" with probability p_e
  c = np.random.choice([0, 1], size=((np.sum(engagement_c)),), p=[1-p_c, p_c])  # 1 = "yes" with probability p_c
return e,c
###Output
_____no_output_____
###Markdown
3.4 Data Summary plot and print functions
###Code
def plotDataSummary(exposed, control):
'This function plots cummulated success'
fig, ax = plt.subplots(figsize=(10,8))
kwargs = {'cumulative': True}
sns.distplot(control.success, hist_kws=kwargs, kde_kws=kwargs, color = 'black')
sns.distplot(exposed.success, hist_kws=kwargs, kde_kws=kwargs, color = 'green')
plt.title('A histogram indicating cummulative distributions of success in the 2 groups black: control, green:exposed')
plt.ylabel('frequency')
plt.xlabel('cummulative success')
def pretyPrintTestResult(self, test):
'''This function print final test result. Json format is recommended. For example
{
"name": "",
"engagementCountControl": ,
"engagementCountExposed": ,
"positiveCountControl": ,
"positiveCountExposed": ,
"ControlSuccessProbability": ,
"ExposedSuccessProbability": ,
"basePositiveRate": ,
"significanceSign": ".",
"lift": ,
"oddRatio": ,
"exactSuccessOddRate":,
"confidenceIntervalLevel": ,
"alpha": ,
"beta": ,
"power": ,
"criticalValue": ,
"lower critical(a)":
"upper critical(b)": ,
"TotalObservation":
}'''
###Output
_____no_output_____
###Markdown
3.5 Testing 3.5.1 Parameters
###Code
'statistical parameters for SPRT'
alpha = 0.05
beta = 0.1
'Compute statistical lower and upper decision points such as a and b'
st = SequentialTest()
a, b = st.computeBoundaries(alpha = alpha, beta = beta)
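# For alpha = 0.05 and beta = 0.10 these evaluate to roughly a = log(0.1/0.95) ~ -2.25
# and b = log(0.9/0.05) ~ 2.89, the classical Wald SPRT decision thresholds.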
##data processing here
exposed,control=transform_data(data)
# odd ratio
odd_ratio=(p_e/(1-p_e))/(p_c/(1-p_c))
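# Odds ratio used as t1: the odds of a "yes" in the exposed group relative to the
# control group; values above 1 indicate a lift in brand awareness.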
###Output
_____no_output_____
###Markdown
3.5.2 Testing
###Code
test = ConditionalSPRT(x = exposed,y = control,t1 = odd_ratio, alpha=alpha,beta=alpha)
test[0]
###Output
_____no_output_____
###Markdown
3.5.3 Plots
###Code
!pip install sprt
import sprt
##plot data summary
# plotDataSummary(exposed,control)
# 'Print test result.'
# pretyPrintTestResult(resultObject)
# generate the requirements file
!pip freeze > requirements.txt
###Output
_____no_output_____ |
Chapter02/.ipynb_checkpoints/Sequential_method_to_build_a_neural_network-checkpoint.ipynb | ###Markdown
###Code
x = [[1,2],[3,4],[5,6],[7,8]]
y = [[3],[7],[11],[15]]
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torch.optim import SGD
device = 'cuda' if torch.cuda.is_available() else "cpu"
class MyDataset(Dataset):
def __init__(self, x, y):
        super().__init__()
import torch
import torch.nn as nn
import numpy as np
from torch.utils.data import Dataset, DataLoader
device = 'cuda' if torch.cuda.is_available() else 'cpu'
class MyDataset(Dataset):
def __init__(self, x, y):
self.x = torch.tensor(x).float().to(device)
self.y = torch.tensor(y).float().to(device)
def __getitem__(self, ix):
return self.x[ix], self.y[ix]
def __len__(self):
return len(self.x)
ds = MyDataset(x, y)
dl = DataLoader(ds, batch_size=2, shuffle=True)
model = nn.Sequential(
nn.Linear(2, 8),
nn.ReLU(),
nn.Linear(8, 1)
).to(device)
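# The Sequential model chains the layers in order: 2 input features -> 8 hidden units
# with ReLU -> 1 output, matching the 2-feature inputs and scalar targets above.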
!pip install torch_summary
from torchsummary import summary
summary(model, torch.zeros(1,2));
loss_func = nn.MSELoss()
from torch.optim import SGD
opt = SGD(model.parameters(), lr = 0.001)
import time
loss_history = []
start = time.time()
for _ in range(50):
for ix, iy in dl:
opt.zero_grad()
loss_value = loss_func(model(ix),iy)
loss_value.backward()
opt.step()
    loss_history.append(loss_value.item())  # store the scalar loss so the computation graph can be freed
end = time.time()
print(end - start)
val = [[8,9],[10,11],[1.5,2.5]]
val = torch.tensor(val).float()
model(val.to(device))
val.sum(-1)
###Output
_____no_output_____ |
examples/detection/convert_detection_model.ipynb | ###Markdown
Download the pretrained model Download the model configuration file and checkpoint containing pretrained weights by using the following command. For improved performance, increase the non-max suppression score threshold in the downloaded config file from 1e-8 to something greater, like 0.1.
###Code
config_path, checkpoint_path = download_detection_model(MODEL, 'data')
###Output
_____no_output_____
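###Markdown
 For reference, the edit described above happens inside the downloaded `pipeline.config`; the relevant block typically looks like the sketch below (field nesting can vary slightly between model configs):
###Code
# Illustrative excerpt of pipeline.config (protobuf text format), shown here as comments:
#
#   post_processing {
#     batch_non_max_suppression {
#       score_threshold: 0.1   # raised from 1e-8 as suggested above
#       iou_threshold: 0.6
#       # ...
#     }
#   }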
###Markdown
Build the frozen graph
###Code
frozen_graph, input_names, output_names = build_detection_graph(
config=config_path,
checkpoint=checkpoint_path,
score_threshold=0.3,
batch_size=1
)
###Output
W0901 09:08:58.797062 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/tf_trt_models-0.0-py3.6.egg/tf_trt_models/detection.py:179: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
W0901 09:08:58.798357 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/tf_trt_models-0.0-py3.6.egg/tf_trt_models/detection.py:183: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
W0901 09:08:58.974009 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:381: The name tf.gfile.MakeDirs is deprecated. Please use tf.io.gfile.makedirs instead.
W0901 09:08:58.975014 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:113: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0901 09:08:58.989618 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/core/preprocessor.py:2412: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead.
W0901 09:08:59.030214 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/meta_architectures/faster_rcnn_meta_arch.py:166: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
W0901 09:09:08.932466 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/predictors/convolutional_box_predictor.py:150: The name tf.logging.info is deprecated. Please use tf.compat.v1.logging.info instead.
W0901 09:09:08.988511 140190800152384 deprecation.py:323] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/core/box_list_ops.py:141: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W0901 09:09:09.545723 140190800152384 deprecation.py:506] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/utils/spatial_transform_ops.py:418: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.
Instructions for updating:
box_ind is deprecated, use box_indices instead
W0901 09:09:11.165808 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/layers/python/layers/layers.py:1634: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
W0901 09:09:14.836183 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:180: batch_gather (from tensorflow.python.ops.array_ops) is deprecated and will be removed after 2017-10-25.
Instructions for updating:
`tf.batch_gather` is deprecated, please use `tf.gather` with `batch_dims` instead.
W0901 09:09:15.524989 140190800152384 deprecation.py:323] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:362: get_or_create_global_step (from tensorflow.contrib.framework.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.get_or_create_global_step
W0901 09:09:15.531374 140190800152384 deprecation.py:323] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:518: print_model_analysis (from tensorflow.contrib.tfprof.model_analyzer) is deprecated and will be removed after 2018-01-01.
Instructions for updating:
Use `tf.profiler.profile(graph, run_meta, op_log, cmd, options)`. Build `options` with `tf.profiler.ProfileOptionBuilder`. See README.md for details
W0901 09:09:15.533828 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/profiler/internal/flops_registry.py:142: tensor_shape_from_node_def_name (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.tensor_shape_from_node_def_name`
651 ops no flops stats due to incomplete shapes.
651 ops no flops stats due to incomplete shapes.
W0901 09:09:22.164069 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:411: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
W0901 09:09:27.135222 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
W0901 09:09:40.721538 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/tools/freeze_graph.py:233: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
W0901 09:09:40.722523 140190800152384 deprecation.py:323] From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/graph_util_impl.py:270: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
W0901 09:09:50.972707 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:288: The name tf.saved_model.builder.SavedModelBuilder is deprecated. Please use tf.compat.v1.saved_model.builder.SavedModelBuilder instead.
W0901 09:09:50.974496 140190800152384 deprecation.py:323] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:291: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
W0901 09:09:50.975758 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:297: The name tf.saved_model.signature_def_utils.build_signature_def is deprecated. Please use tf.compat.v1.saved_model.signature_def_utils.build_signature_def instead.
W0901 09:09:50.976409 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:300: The name tf.saved_model.signature_constants.PREDICT_METHOD_NAME is deprecated. Please use tf.saved_model.PREDICT_METHOD_NAME instead.
W0901 09:09:50.977097 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:305: The name tf.saved_model.tag_constants.SERVING is deprecated. Please use tf.saved_model.SERVING instead.
W0901 09:09:50.977633 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/exporter.py:307: The name tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY is deprecated. Please use tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY instead.
W0901 09:09:53.562907 140190800152384 deprecation_wrapper.py:119] From /root/.local/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/utils/config_util.py:188: The name tf.gfile.Open is deprecated. Please use tf.io.gfile.GFile instead.
###Markdown
Optimize the model with TensorRT
###Code
print(output_names)
trt_graph = trt.create_inference_graph(
input_graph_def=frozen_graph,
outputs=output_names,
max_batch_size=1,
max_workspace_size_bytes=1 << 25,
precision_mode='FP16',
minimum_segment_size=50
)
save_path = os.path.join('.', 'model')
if tf.gfile.Exists(save_path) == False:
tf.gfile.MkDir(save_path)
save_file_path = os.path.join(save_path, MODEL + '_trt_fp16.pb')
with open(save_file_path, 'wb') as f:
f.write(trt_graph.SerializeToString())
###Output
_____no_output_____
###Markdown
Create session and load graph
###Code
tf_config = tf.ConfigProto()
tf_config.gpu_options.allow_growth = True
tf_sess = tf.Session(config=tf_config)
tf.import_graph_def(trt_graph, name='')
tf_input = tf_sess.graph.get_tensor_by_name(input_names[0] + ':0')
tf_scores = tf_sess.graph.get_tensor_by_name('detection_scores:0')
tf_boxes = tf_sess.graph.get_tensor_by_name('detection_boxes:0')
tf_classes = tf_sess.graph.get_tensor_by_name('detection_classes:0')
tf_num_detections = tf_sess.graph.get_tensor_by_name('num_detections:0')
###Output
_____no_output_____
###Markdown
Load and Preprocess Image
###Code
image = Image.open(IMAGE_PATH)
plt.imshow(image)
image_resized = np.array(image.resize((300, 300)))
image = np.array(image)
###Output
_____no_output_____
###Markdown
Run network on Image
###Code
scores, boxes, classes, num_detections = tf_sess.run([tf_scores, tf_boxes, tf_classes, tf_num_detections], feed_dict={
tf_input: image_resized[None, ...]
})
boxes = boxes[0] # index by 0 to remove batch dimension
scores = scores[0]
classes = classes[0]
num_detections = num_detections[0]
###Output
_____no_output_____
###Markdown
Display Results
###Code
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.imshow(image)
# plot boxes exceeding score threshold
for i in range(num_detections.astype('int32')):
# scale box to image coordinates
box = boxes[i] * np.array([image.shape[0], image.shape[1], image.shape[0], image.shape[1]])
# display rectangle
patch = patches.Rectangle((box[1], box[0]), box[3] - box[1], box[2] - box[0], color='g', alpha=0.3)
ax.add_patch(patch)
# display class index and score
plt.text(x=box[1] + 10, y=box[2] - 10, s='%d (%0.2f) ' % (classes[i], scores[i]), color='w')
plt.show()
###Output
_____no_output_____
###Markdown
Benchmark
###Code
num_samples = 50
t0 = time.time()
for i in range(num_samples):
scores, boxes, classes, num_detections = tf_sess.run([tf_scores, tf_boxes, tf_classes, tf_num_detections], feed_dict={
tf_input: image_resized[None, ...]
})
t1 = time.time()
print('Average runtime: %f seconds' % (float(t1 - t0) / num_samples))
###Output
Average runtime: 0.644130 seconds
###Markdown
Close session to release resources
###Code
tf_sess.close()
###Output
_____no_output_____ |
10_ml_clustering/notebooks/02_exercises.ipynb | ###Markdown
Clustering using `scikit-learn`
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing, cluster, metrics
from sklearn.pipeline import Pipeline
from scipy.spatial.distance import cdist, pdist
IRIS_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/iris/bezdekIris.data'
var_names = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
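# A minimal loading step (an assumption about how the exercise proceeds): the CSV at
# IRIS_URL has no header row, so we pass the column names declared above.
iris = pd.read_csv(IRIS_URL, header=None, names=var_names)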
###Output
_____no_output_____ |
data_science/research/archive_exploration/01 - subject coocurrence.ipynb | ###Markdown
 Archive data The Wellcome archive sits in a collections management system called CALM, which follows a rough set of standards and guidelines for storing archival records called [ISAD(G)](https://en.wikipedia.org/wiki/ISAD(G)). The archive is comprised of _collections_, each of which has a hierarchical set of series, sections, subjects, items and pieces sitting underneath it. In the following notebooks I'm going to explore it and try to make as much sense of it as I can programmatically. Let's start by loading in a few useful packages and defining some nice utils.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
plt.rcParams['figure.figsize'] = (20, 20)
import pandas as pd
import numpy as np
import networkx as nx
from sklearn.cluster import AgglomerativeClustering
from umap import UMAP
from tqdm import tqdm_notebook as tqdm
def flatten(input_list):
return [item
for sublist in input_list
for item in sublist]
def cartesian(*arrays):
return np.array([x.reshape(-1) for x in np.meshgrid(*arrays)]).T
def clean(subject):
return subject.strip().lower().replace('<p>', '')
###Output
_____no_output_____
###Markdown
let's load up our CALM data. The data has been exported in its entirety as a single `.json` where each line is a record. You can download the data yourself using [this script](https://github.com/wellcometrust/platform/blob/master/misc/download_oai_harvest.py). Stick the `.json` in the neighbouring `/data` directory to run the rest of the notebook seamlessly.
###Code
df = pd.read_json('data/calm_records.json')
len(df)
df.astype(str).describe()
###Output
_____no_output_____
###Markdown
 Exploring individual columns At the moment I have no idea what kind of information CALM contains - let's look at the list of column names.
###Code
list(df)
###Output
_____no_output_____
###Markdown
Here I'm looking through a sample of values in each column, choosing the columns to explore based on the their headings, a bit of contextual info from colleagues and the `df.describe()` above.
###Code
df['Subject']
###Output
_____no_output_____
###Markdown
 After much trial and error... Subjects look like an interesting avenue to explore further. Where subjects have _actually_ been filled in and the entry is not `None`, a list of subjects is returned. We can explore some of these subjects' subtleties by creating an adjacency matrix. We'll count the number of times each subject appears alongside every other subject and return a big $n \times n$ matrix, where $n$ is the total number of unique subjects. We can use this adjacency matrix for all sorts of stuff, but we have to build it first. To start, let's get a unique list of all subjects. This involves unpacking each sub-list and flattening them out into one long list, before finding the unique elements. We'll also use the `clean` function defined above to get rid of any irregularities which might become annoying later on.
###Code
subjects = flatten(df['Subject'].dropna().tolist())
print(len(subjects))
subjects = list(set(map(clean, subjects)))
print(len(subjects))
###Output
_____no_output_____
###Markdown
At this point it's often helpful to index our data, ie transform words into numbers. We'll create two dictionaries which map back and forth between the subjects and their corresponding indicies:
###Code
index_to_subject = {index: subject for index, subject in enumerate(subjects)}
subject_to_index = {subject: index for index, subject in enumerate(subjects)}
###Output
_____no_output_____
###Markdown
Lets instantiate an empty numpy array which we'll then fill with our coocurrence data. Each column and each row will represent a subject - each cell (the intersection of a column and row) will therefore represent the 'strength' of the interaction between those subjects. As we haven't seen any interactions yet, we'll set every array element to 0.
###Code
adjacency = np.zeros((len(subjects), len(subjects)),
                     dtype=np.uint16)
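# np.uint16 keeps the matrix compact; note that any pair co-occurring more than
# 65535 times would overflow this dtype.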
###Output
_____no_output_____
###Markdown
To populate the matrix, we want to find every possible combination of subject in each sub-list from our original column, ie if we had the subjects`[Disease, Heart, Heart Diseases, Cardiology]`we would want to return `[['Disease', 'Disease'], ['Heart', 'Disease'], ['Heart Diseases', 'Disease'], ['Cardiology', 'Disease'], ['Disease', 'Heart'], ['Heart', 'Heart'], ['Heart Diseases', 'Heart'], ['Cardiology', 'Heart'], ['Disease', 'Heart Diseases'], ['Heart', 'Heart Diseases'], ['Heart Diseases', 'Heart Diseases'], ['Cardiology', 'Heart Diseases'], ['Disease', 'Cardiology'], ['Heart', 'Cardiology'], ['Heart Diseases', 'Cardiology'], ['Cardiology', 'Cardiology']]`The `cartesian()` function which I've defined above will do that for us. We then find the appropriate intersection in the matrix and add another unit of 'strength' to it. We'll do this for every row of subjects in the `['Subjects']` column.
###Code
for row_of_subjects in tqdm(df['Subject'].dropna()):
for subject_pair in cartesian(row_of_subjects, row_of_subjects):
subject_index_1 = subject_to_index[clean(subject_pair[0])]
subject_index_2 = subject_to_index[clean(subject_pair[1])]
adjacency[subject_index_1, subject_index_2] += 1
###Output
_____no_output_____
###Markdown
We can do all sorts of fun stuff now - adjacency matrices are the foundation on which all of graph theory is built. However, because it's a bit more interesting, I'm going to start with some dimensionality reduction. We'll get to the graphy stuff later. Using [UMAP](https://github.com/lmcinnes/umap), we can squash the $n \times n$ dimensional matrix down into a $n \times m$ dimensional one, where $m$ is some arbitrary integer. Setting $m$ to 2 will then allow us to plot each subject as a point on a two dimensional plane. UMAP will try to preserve the 'distances' between subjects - in this case, that means that related or topically similar subjects will end up clustered together, and different subjects will move apart.
###Code
embedding_2d = pd.DataFrame(UMAP(n_components=2)
.fit_transform(adjacency))
embedding_2d.plot.scatter(x=0, y=1);
###Output
_____no_output_____
###Markdown
We can isolate the clusters we've found above using a number of different methods - `scikit-learn` provides easy access to some very powerful algorithms. Here I'll use a technique called _agglomerative clustering_, and make a guess that 15 is an appropriate number of clusters to look for.
###Code
n_clusters = 15
embedding_2d['labels'] = (AgglomerativeClustering(n_clusters)
.fit_predict(embedding_2d.values))
embedding_2d.plot.scatter(x=0, y=1,
c='labels',
cmap='Paired');
###Output
_____no_output_____
###Markdown
We can now use the `index_to_subject` mapping that we created earlier to examine which subjects have been grouped together into clusters
###Code
for i in range(n_clusters):
print(str(i) + ' ' + '-'*80 + '\n')
print(np.sort([index_to_subject[index]
for index in embedding_2d[embedding_2d['labels'] == i].index.values]))
print('\n')
###Output
_____no_output_____ |
Assignemnt_3.ipynb | ###Markdown
 Linear Algebra for ChE Assignment 3: Matrices We'll try to explore in greater dimensions now that you have a basic understanding of Python. Objectives: You will be able to 1. be familiar with matrices and how they relate to linear equations, 2. perform basic matrix computations, and 3. program and translate matrix equations using Python. Discussion
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Matrices One of the most fundamental aspects of modern computing is the notation and use of matrices. Matrices are also useful representations of complicated equations or many interconnected equations, ranging from two-dimensional systems to hundreds of thousands of them. Let's assume you have the systems $A$, $B$, and $C$ below: $$A = \left\{ \begin{array}{l} x + y \\ 4x - 10y \end{array}\right. \\ B = \left\{ \begin{array}{l} x+y+z \\ 3x -2y -z \\ -x + 4y +2z \end{array}\right. \\ C = \left\{ \begin{array}{l} w-2x+3y-4z \\ 3w- x -2y +z \end{array}\right. $$ Assume you've already covered the fundamental format, types, and operations of matrices. We'll go ahead and do them in Python here. Declaring Matrices We'll express a system of linear equations as a matrix, much like we did in our previous laboratory exercise. The elements of a matrix are the numbers it contains. Matrices have a list/array-like structure in which these elements are grouped and ordered in rows and columns. These elements are indexed according to their location in relation to their rows and columns, exactly like arrays. The equation below expresses this: $A$ is a matrix whose elements are denoted by $a_{(i,j)}$, where $i$ indexes the rows and $j$ indexes the columns, so a matrix's size is $i \times j$. $$A=\begin{bmatrix}a_{(0,0)}&a_{(0,1)}&\dots&a_{(0,j-1)}\\a_{(1,0)}&a_{(1,1)}&\dots&a_{(1,j-1)}\\\vdots&\vdots&\ddots&\vdots\\a_{(i-1,0)}&a_{(i-1,1)}&\dots&a_{(i-1,j-1)}\end{bmatrix}$$ We've previously gone over some of the different types of matrices as vectors, but in this lab assignment we'll go over them again. Because you already know how to create vectors using shape, dimension, and size attributes, we'll use these features to study matrices.
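###Markdown
 As a quick illustration (an added sketch, not part of the original hand-out), the coefficient matrix of system $B$ above can be written down directly as a NumPy array; the variable name `B_coeffs` is used only for this example.
###Code
import numpy as np

# Coefficients of B: x + y + z, 3x - 2y - z, -x + 4y + 2z
B_coeffs = np.array([
    [ 1,  1,  1],
    [ 3, -2, -1],
    [-1,  4,  2]
])
B_coeffs.shape   # (3, 3)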
###Code
## Since we'll keep on describing matrices. Let's make a function.
def describe_mat(matrix):
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\n')
## Declaring a 2 x 2 matrix
A = np.array([
[1, 2],
[3, 1]
])
describe_mat(A)
G = np.array([
[1,1],
[2,2]
])
describe_mat(G)
## Declaring a 3 x 2 matrix
B = np.array([
[8, 2],
[5, 4],
[1, 1]
])
describe_mat(B)
H = np.array([1,2,3,4,5])
describe_mat(H)
###Output
Matrix:
[1 2 3 4 5]
Shape: (5,)
Rank: 1
###Markdown
Categorizing Matrices Matrixes can be classified in a variety of ways. One may be based on their form, while the other could be based on their element values. We'll do our best to get through them. According to shape Row and Column Matrices In vector and matrix calculations, row and column matrices are frequent. They can also be used to represent the rows and columns of a larger vector space. A single column or row is used to depict row and column matrices. As a result, the form of row matrices is 1 x j, whereas the shape of column matrices is i x 1.
###Code
## Declaring a Row Matrix
row_mat_1D = np.array([
1, 3, 2
]) ## this is a 1-D Matrix with a shape of (3,), it's not really considered as a row matrix.
row_mat_2D = np.array([
[1,2,3]
]) ## this is a 2-D Matrix with a shape of (1,3)
describe_mat(row_mat_1D)
describe_mat(row_mat_2D)
## Declaring a Column Matrix
col_mat = np.array([
[1],
[2],
[5]
]) ## this is a 2-D Matrix with a shape of (3,1)
describe_mat(col_mat)
###Output
Matrix:
[[1]
[2]
[5]]
Shape: (3, 1)
Rank: 2
###Markdown
 Square Matrices Matrices with the same number of rows and columns are known as square matrices. To identify square matrices, we can modify our matrix descriptor function to report whether a matrix is square.
###Code
def describe_mat(matrix):
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
square_mat = np.array([
[1,2,5],
[3,3,8],
[6,1,2]
])
non_square_mat = np.array([
[1,2,5],
[3,3,8]
])
describe_mat(square_mat)
describe_mat(non_square_mat)
###Output
Matrix:
[[1 2 5]
[3 3 8]
[6 1 2]]
Shape: (3, 3)
Rank: 2
Is Square: True
Matrix:
[[1 2 5]
[3 3 8]]
Shape: (2, 3)
Rank: 2
Is Square: False
###Markdown
According to element values Null Matrix A Null Matrix is a matrix that has no elements. It is always a subspace of any vector or matrix
###Code
def describe_mat(matrix):
if matrix.size > 0:
is_square = True if matrix.shape[0] == matrix.shape[1] else False
print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nIs Square: {is_square}\n')
else:
print('Matrix is Null')
null_mat = np.array([])
describe_mat(null_mat)
###Output
Matrix is Null
###Markdown
Zero Matrix A zero matrix can be any rectangular matrix but with all elements having a value of 0.
###Code
zero_mat_row = np.zeros((1,2))
zero_mat_sqr = np.zeros((2,2))
zero_mat_rct = np.zeros((3,2))
print(f'Zero Row Matrix: \n{zero_mat_row}')
print(f'Zero Square Matrix: \n{zero_mat_sqr}')
print(f'Zero Rectangular Matrix: \n{zero_mat_rct}')
###Output
Zero Row Matrix:
[[0. 0.]]
Zero Square Matrix:
[[0. 0.]
[0. 0.]]
Zero Rectangular Matrix:
[[0. 0.]
[0. 0.]
[0. 0.]]
###Markdown
Ones Matrix A ones matrix, just like the zero matrix, can be any rectangular matrix but all of its elements are 1s instead of 0s.
###Code
ones_mat_row = np.ones((1,2))
ones_mat_sqr = np.ones((2,2))
ones_mat_rct = np.ones((3,2))
print(f'Ones Row Matrix: \n{ones_mat_row}')
print(f'Ones Square Matrix: \n{ones_mat_sqr}')
print(f'Ones Rectangular Matrix: \n{ones_mat_rct}')
###Output
Ones Row Matrix:
[[1. 1.]]
Ones Square Matrix:
[[1. 1.]
[1. 1.]]
Ones Rectangular Matrix:
[[1. 1.]
[1. 1.]
[1. 1.]]
###Markdown
Diagonal Matrix A diagonal matrix is a square matrix that has values only at the diagonal of the matrix.
###Code
np.array([
[2,0,0],
[0,3,0],
[0,0,5]
])
# a[1,1], a[2,2], a[3,3], ... a[n-1,n-1]
d = np.diag([2,3,5,7])
np.diag(d).shape == d.shape[0] == d.shape[1]
###Output
_____no_output_____
###Markdown
Identity Matrix An identity matrix is a special diagonal matrix in which the values at the diagonal are ones
###Code
np.eye(5)
np.identity(5)
###Output
_____no_output_____
###Markdown
 Upper Triangular Matrix An upper triangular matrix is a matrix that has no values below the diagonal.
###Code
np.array([
[1,2,3],
[0,3,1],
[0,0,5]
])
###Output
_____no_output_____
###Markdown
Lower Triangular Matrix A lower triangular matrix is a matrix that has no values above the diagonal
###Code
np.array([
[1,0,0],
[5,3,0],
[7,8,5]
])
###Output
_____no_output_____
###Markdown
 Practice 1. Given the linear combination below, try to create a corresponding matrix representing it: $$\theta = 5x + 3y$$ 2. Given the system of linear combinations below, try to encode it as a matrix. Also describe the matrix. $$A = \left\{\begin{array}{l}5x_1 + 2x_2 +x_3\\4x_2 - x_3\\10x_3\end{array}\right.$$ 3. Given the matrix below, express it as a linear combination in a markdown cell.
###Code
G = np.array([
[1,7,8],
[2,2,2],
[4,6,7]
])
###Output
_____no_output_____
###Markdown
 Given the matrix below, display the output as LaTeX markdown and also express it as a system of linear combinations.
###Code
H = np.tril(G)
H
###Output
_____no_output_____
###Markdown
Matrix Algebra Addition
###Code
A = np.array([
[1,2],
[2,3],
[4,1]
])
B = np.array([
[2,2],
[0,0],
[1,1]
])
A+B
2+A ##Broadcasting
# 2*np.ones(A.shape)+A
###Output
_____no_output_____
###Markdown
Subtraction
###Code
A-B
3-B == 3*np.ones(B.shape)-B
###Output
_____no_output_____
###Markdown
Element-wise Multiplication
###Code
A*B
np.multiply(A,B)
2*A
A@B.T  ## matrix multiplication needs matching inner dimensions, so B (3x2) is transposed
alpha=10**-10
A/(alpha+B)
np.add(A,B)
###Output
_____no_output_____
###Markdown
Activity Task 1 Create a function named mat_desc() that througouhly describes a matrix, it should1. Displays the shape, size, and rank of the matrix.2. Displays whether the matrix is square or non-square.3. Displays whether the matrix is an empty matrix.4. Displays if the matrix is an identity, ones, or zeros matrixUse 5 sample matrices in which their shapes are not lower than (3,3) . In your methodology, create a flowchart discuss the functions and methods you have done. Present your results in the results section showing the description of each matrix you have declared.
###Code
## Function area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_desc (matrix):
if matrix.size > 0:
if matrix.shape [0] == matrix.shape[1]:
s = "Square."
else:
s = "Non-square."
if np.all(matrix == np.identity(matrix.shape[0])):
sp = "Identity Matrix."
elif np.all(matrix == np.zeros(matrix.shape)):
sp = "Zero Matrix."
elif np.all(matrix == np.ones(matrix.shape)):
sp = "Ones Matrix."
else:
sp = "None."
        print(f'Matrix:\n{matrix}\n\nShape:\t{matrix.shape}\nRank:\t{matrix.ndim}\nSquare?: {s}\nSpecial Characteristics: {sp}')
else:
print('Matrix is Empty')
## Matrix declarations
hi = np.array([
[3,1,2,4],
[4,7,9,6],
[9,1,6,7]
])
one = np.array([
[1,1,1,1,1],
[1,1,1,1,1],
[1,1,1,1,1],
[1,1,1,1,1],
])
id = np.array([
[1,0,0,0],
[0,1,0,0],
[0,0,1,0],
[0,0,0,1]
])
## Test Areas
mat_desc(hi)
mat_desc(one)
mat_desc(id)
###Output
Matrix:
[[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]]
Shape: (4, 4)
Rank: 2
Square?: Square.
Special Characteristics: Identity Matrix.
###Markdown
Task 2 Create a function named mat_operations() that takes in two matrices a input parameters it should1. Determines if the matrices are viable for operation and returns your own error message if they are not viable.2. Returns the sum of the matrices.3. Returns the differen of the matrices.4. Returns the element-wise multiplication of the matrices.5. Returns the element-wise division of the matrices.Use 5 sample matrices in which their shapes are not lower than (3,3) . In your methodology, create a flowchart discuss the functions and methods you have done. Present your results in the results section showing the description of each matrix you have declared.
###Code
# Function Area
import numpy as np
import matplotlib.pyplot as plt
import scipy.linalg as la
%matplotlib inline
def mat_operations(matA,matB):
op = input("Enter opration(+,-,*,/): ")
if matA.shape == matB.shape:
# proceed to operations
if op == '+':
return matA + matB
elif op == '-':
return matA - matB
elif op == '*':
return matA * matB
elif op == '/':
return matA/matB
else:
print("The matrices are viable but you have not inputted a correct operation.")
else:
print("Huhuhu.The matrices are not viable.")
#Matrix declarations
A = np.array([
[1,1,1,1],
[1,1,1,1],
[1,1,1,1],
[1,1,1,1]
])
B = np.array([
[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2],
])
C = np.array([
[3, 3, 3, 3],
[3, 3, 3, 3],
[3, 3, 3, 3]
])
D = np.array([
[4, 4, 4, 4],
[4, 4, 4, 4],
[4, 4, 4, 4]
])
mat_operations(A,C)
mat_operations(D,C)
mat_operations(A,B)
###Output
Enter opration(+,-,*,/): -
|
notebooks/processing/01-signal-preprocessing.ipynb | ###Markdown
Signal Preprocessing Before the raw signal can be used by the machine learning algorithms it must be preprocessed. This notebook will perform the different preprocessing steps on some sample signals to visually verify they are functioning properly.
###Code
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from sklearn.pipeline import Pipeline
from src.data.generators import data_generator_levels, data_generator_signals
from src.models.encoders.levelbinary import LevelBinary
from src.models.encoders.levelmulti import LevelMulti
from src.models.transformers.filter import Filter
from src.models.transformers.baseline import Baseline
from src.models.transformers.truncate import Truncate
mpl.style.use('seaborn-notebook')
plt.rcParams["figure.figsize"] = (12, 5)
###Output
_____no_output_____
###Markdown
 Encoding Target Values The target value for this project is a floating point concentration value. This value must be encoded into a class value in order to help analyze the results of each algorithm. The following code will generate a sample with one value in each of the target classes. The encoders are then used to determine the class values.
###Code
cvals = np.array([0.15, 0.50, 0.85])
xvalues, ylevel, blexps, ydata0 = data_generator_levels(cvals)
signals = data_generator_signals(cvals, noise=0.0)
ynormal = LevelBinary(targetmin=0.2, targetmax=0.8).transform(cvals)
ymulti = LevelMulti(targetmin=0.2, targetmax=0.8).transform(cvals)
print(cvals)
print(ynormal)
print(ymulti)
###Output
[0.15 0.5 0.85]
[1 0 1]
[0 1 2]
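###Markdown
 As a quick sanity check (a sketch added for illustration, using the same 0.2/0.8 thresholds passed to the encoders rather than the encoder classes themselves), the encodings above can be reproduced with plain NumPy:
###Code
# Binary: 1 when the concentration falls outside [0.2, 0.8]; Multi: 0 below, 1 inside, 2 above
lo, hi = 0.2, 0.8
manual_binary = ((cvals < lo) | (cvals > hi)).astype(int)           # -> [1 0 1]
manual_multi = np.where(cvals < lo, 0, np.where(cvals > hi, 2, 1))  # -> [0 1 2]
print(manual_binary, manual_multi)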
###Markdown
Preprocessing Pipeline The raw signal from each sample will be run through the different steps of the preprocessing pipeline. The following code and plots will show the output of these different steps to ensure the entire transformation produces the best input signal for the machine learning algorithms. The following table gives a brief description of each preprocessing step. | Step | Description | |:----------|:-------------------------------------------------------------------------------------| | filter | Applys Savitsky-Golay filter to raw data to smooth out the signal. | | baseline | Determines baseline signal from the input signal. | | correct | Performs baseline correction by subtracting off the baseline signal. | | truncate | Slices the input signal to only output the region of interest from the signal. |
###Code
xmin = 200
xmax = 450
datapipeline = Pipeline([
('filter', Filter(windowsize=15, polyorder=2)),
('baseline', Baseline(polyorder=3, weight=0.95, outbaseline=True)),
('correct', Baseline(polyorder=3, weight=0.95)),
('truncate', Truncate(xmin=xmin, xmax=xmax))
])
ydata_fl = datapipeline.named_steps['filter'].transform(ydata0.copy())
ydata_bl = datapipeline.named_steps['baseline'].transform(ydata_fl.copy())
ydata_cs = datapipeline.named_steps['correct'].transform(ydata_fl.copy())
ydata_tr = datapipeline.named_steps['truncate'].transform(ydata_cs.copy())
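# Each call above feeds the previous step's output forward by hand, mirroring the
# table in the markdown: filter -> baseline (inspect) / correct -> truncate.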
###Output
_____no_output_____
###Markdown
Baseline Signal (Full Signal) The following set of plots will show the raw signal and the computed baseline signal for each sample. The pure signal used to generate the sample is also displayed.
###Code
for i in range(3):
fig, axs = plt.subplots()
axs.plot(xvalues, signals[i], label='signal')
axs.plot(xvalues, ydata0[i], label='raw')
axs.plot(xvalues, ydata_bl[i], label='baseline')
fig.suptitle('Sample:[{0}] Baseline:[{1:.4f}] Target:[{2:.4f}]'.format(i, blexps[i], ylevel[i]))
plt.legend()
###Output
_____no_output_____
###Markdown
Baseline Signal (Region of Interest) The following set of plots will show the different lines computed from the preprocessing pipeline in only the region of interest for this analysis. It will include the filtered/smoothed signal line since this line is visible is this chart.
###Code
for i in range(3):
fig, axs = plt.subplots()
axs.plot(xvalues[xmin:xmax+1], signals[i, xmin:xmax+1], label='signal')
axs.plot(xvalues[xmin:xmax+1], ydata0[i, xmin:xmax+1], label='raw')
axs.plot(xvalues[xmin:xmax+1], ydata_fl[i, xmin:xmax+1], label='filter')
axs.plot(xvalues[xmin:xmax+1], ydata_bl[i, xmin:xmax+1], label='baseline')
fig.suptitle('Sample:[{0}] Baseline:[{1:.4f}] Target:[{2:.4f}]'.format(i, blexps[i], ylevel[i]))
plt.legend()
###Output
_____no_output_____
###Markdown
Baseline Corrected Signal (Region of Interest) The following set of plots will show the baseline corrected lines computed from the preprocessing pipeline in only the region of interest for this analysis. This data displayed in these charts will end up being the data that will be used by the machine learning algorithms.
###Code
for i in range(3):
fig, axs = plt.subplots()
axs.plot(xvalues[xmin:xmax+1], signals[i, xmin:xmax+1], label='signal')
axs.plot(xvalues[xmin:xmax+1], ydata_tr[i], label='corrected')
fig.suptitle('Sample:[{0}] Baseline:[{1:.4f}] Target:[{2:.4f}]'.format(i, blexps[i], ylevel[i]))
plt.legend()
###Output
_____no_output_____ |
docs/introduction/getting-started/regression.ipynb | ###Markdown
Regression Regression is about predicting a numeric output for a given sample. A labeled regression sample is made up of a bunch of features and a number. The number is usually continuous, but it may also be discrete. We'll use the Trump approval rating dataset as an example.
###Code
from river import datasets
dataset = datasets.TrumpApproval()
dataset
###Output
_____no_output_____
###Markdown
This dataset is a streaming dataset which can be looped over.
###Code
for x, y in dataset:
pass
###Output
_____no_output_____
###Markdown
Let's take a look at the first sample.
###Code
x, y = next(iter(dataset))
x
###Output
_____no_output_____
###Markdown
A regression model's goal is to learn to predict a numeric target `y` from a bunch of features `x`. We'll attempt to do this with a nearest neighbors model.
###Code
from river import neighbors
model = neighbors.KNNRegressor()
model.predict_one(x)
###Output
_____no_output_____
###Markdown
The model hasn't been trained on any data, and therefore outputs a default value of 0. The model can be trained on the sample, which will update the model's state.
###Code
model = model.learn_one(x, y)
###Output
_____no_output_____
###Markdown
If we try to make a prediction on the same sample, we can see that the output is different, because the model has learned something.
###Code
model.predict_one(x)
###Output
_____no_output_____
###Markdown
Typically, an online model makes a prediction, and then learns once the ground truth reveals itself. The prediction and the ground truth can be compared to measure the model's correctness. If you have a dataset available, you can loop over it, make a prediction, update the model, and compare the model's output with the ground truth. This is called progressive validation.
###Code
from river import metrics
model = neighbors.KNNRegressor()
metric = metrics.MAE()
for x, y in dataset:
y_pred = model.predict_one(x)
model.learn_one(x, y)
metric.update(y, y_pred)
metric
###Output
_____no_output_____
###Markdown
This is a common way to evaluate an online model. In fact, there is a dedicated `evaluate.progressive_val_score` function that does this for you.
###Code
from river import evaluate
model = neighbors.KNNRegressor()
metric = metrics.MAE()
evaluate.progressive_val_score(dataset, model, metric)
###Output
_____no_output_____ |
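###Markdown
The same progressive validation works for any other regressor in the library. As a hedged sketch (assuming the usual `compose`, `preprocessing` and `linear_model` modules), a scaled linear model can be evaluated in exactly the same way:
###Code
from river import compose, linear_model, preprocessing

model = compose.Pipeline(
    preprocessing.StandardScaler(),
    linear_model.LinearRegression()
)
metric = metrics.MAE()
evaluate.progressive_val_score(dataset, model, metric)
###Output
_____no_output_____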
Notebooks/Part(2.2) RDM.ipynb | ###Markdown
Load FaceNet Model
###Code
model = load_model('Template_poisoning/model/facenet_keras.h5', compile= False)
model.trainable= False
#model.layers[-2].get_config() 'activation- linear'
#model.layers[-1].get_config() 'batchnorm'
# Read Images (160x160x3)
def load_img(path, resize=None):
img= Image.open(path)
img= img.convert('RGB')
if resize is not None:
img= img.resize((resize, resize))
return np.asarray(img)
def Centroid(vector):
'''
After the random initialization, we first optimize the
    glasses using the adversary's centroid in feature space (Xc)
# Arguments
Input: feature vector(batch_size x feature)
'''
Xc= tf.math.reduce_mean(vector, axis=0)
return tf.reshape(Xc, (1, vector.shape[-1]))
###Output
_____no_output_____
###Markdown
Data Loading
###Code
def load_data(dir):
X=[]
for i, sample in enumerate(os.listdir(dir)):
image= load_img(os.path.join(dir, sample))
image = cv2.resize(image, (160, 160))
X.append(image/255.0)
return np.array(X)
X= load_data('Template_poisoning/Croped_data/adversary_images')
Target_samples= load_data('Template_poisoning/Croped_data/target_images')
X_ex= X.copy() # Copy of X
print('Adversarial Batch:',X.shape)
print('Target Batch:',Target_samples.shape)
# GET Mask
mask= load_img('Template_poisoning/final_mask.png') #Sacle(0-255), RGB
mask= mask/255.0
mask.shape
###Output
_____no_output_____
###Markdown
Get Predictions
###Code
# img_tr= load_img('Template_poisoning/Croped_data/target_images/ben_afflek_0.jpg')
# feature_tr= model.predict(img_tr[np.newaxis, :, :, :])
#Target= Generate_target(feature_tr, batch_size= X.shape[0])
Targetc= Centroid(model.predict(Target_samples))
Targetc.shape
Xc= Centroid(model.predict(X)) #(1 x 128)
print(Xc.shape)
delta_x= np.random.uniform(low=0.0, high=1.0, size=X.shape) # Scale(0-1)
delta_x.shape
f, ax= plt.subplots(1, 5, figsize=(14, 4))
image= X*(1-mask)+ delta_x*mask
for i in range(5):
ax[i].imshow(image[i+5])
ax[i].set_xticks([]); ax[i].set_yticks([])
plt.show()
del image
###Output
_____no_output_____
###Markdown
Random Distance Method
Conditions:
* Target's cancelable biometric identity revealed
* Target's Stolen Token= 6
###Code
def salt_2dim(X):
samples, features= X.shape
X_out= np.zeros((samples,features//2, 2))
for i, x in enumerate(X):
X_out[i,:, 0]= x[:features//2]
X_out[i,:, 1]= x[features//2:]
return X_out
def shuffle(X_vec, p=4, seed= 0, with_seed= True):
for X in X_vec:
j= 64+p
for i in np.arange(p, j, p):
x= X[(i-p):i]
if with_seed:
np.random.seed(seed)
np.random.shuffle(x[:8])
return X_vec
# For Tensors
def shuffle_tesnsor(X_vec, p=4, seed= 0, with_seed= True):
for X in X_vec:
j= 64+p
for i in np.arange(p, j, p):
x= X[(i-p):i]
if with_seed:
tf.random.set_seed(seed)
tf.random.shuffle(x)
return X_vec
def get_RDM(Fv, token=6, c=100):
'''
INPUT---
Fv.shape: (None, feature)
token: Uses token key
PROCESS---
1. Feature vector(Fv) multiplied by a large constant, say c = 100 due to its low dynamic range.
2. To increase the entropy of the template, fv is salted by ORing it with a random grid RG as fs = fv + RG.
3. Fv is divided into two equal parts.
    4. A user-specific key (K) of dimension 1 × N is generated,
    which has randomly distributed non-integral values in the range [-100, 100].
5. Computation of distance via random feature vectors.
6. In order to provide noninvertibility, median filtering is applied on distance vector D
to generate transformed feature vector T f , where the intensity
    values are shuffled in a p × 1 neighborhood. Tf is stored as the
final transformed template.
OUTPUT---
Out.shape: (None, feature//2)
'''
#1
Fv*= c
#2
np.random.seed(token)
Fv+= np.random.randint(1, 256, size= Fv.shape)
#3
Fv= salt_2dim(Fv)
#4
np.random.seed(token)
K= np.random.randint(-100, 101, size= (1, Fv.shape[-1]))
#5
dist =(Fv- K)**2
dist= np.sqrt(np.sum(dist, 2))
#6
Tf= shuffle(dist.copy(), p=4, seed= token, with_seed= True)
return Tf
#for tensors
def RDM_tf(Fv, token=6, c=100.0):
#Fv= tf.Variable(Fv, dtype=tf.float64)
    Fv*= c  # scale by the large constant c (step 1, as in get_RDM)
tf.random.set_seed(token)
Fv+= tf.random.uniform(
Fv.shape, minval=1, maxval=256, dtype=tf.dtypes.float64, seed=None, name=None)
Fv= tf.convert_to_tensor(salt_2dim(Fv))
tf.random.set_seed(token)
K= tf.random.uniform((1, Fv.shape[-1]), minval=-101, maxval=101, dtype=tf.dtypes.float64, seed=None, name=None)
dist =(Fv- K)**2
dist= tf.math.sqrt(tf.math.reduce_sum(dist, 2))
return shuffle_tesnsor(dist, p=4, seed= token, with_seed= True)
Targetc.shape, Xc.shape
Targetc= get_RDM(Targetc)
Targetc= Generate_target(Targetc, batch_size= X.shape[0])
Targetc.shape
Xc= get_RDM(Xc)
Xc= Generate_target(Xc, batch_size=X.shape[0]) #(46 x 128)
Xc.shape
###Output
_____no_output_____
###Markdown
Back propagation and Loss Pipeline
###Code
def loss_object(pred, label, delta= delta_x, direction= False):
# Loss= euclidean distance + Delta_x pixel Variance
dist= Euclidean_dist(pred, label)
variance= Sample_variance(delta_x)
if direction:
sc= tf.math.subtract(1.0, tf.math.divide(1.0, label.shape[0]))
#print(dist.shape, sc.shape)
vector_mean= dist* tf.cast(sc, dist.dtype) #tf.math.multiply(dist, sc)
target_dir= tf.math.multiply(vector_mean, dist)
Loss= tf.math.add(target_dir, tf.cast(variance, dist.dtype))
return Loss
Loss= tf.math.add(tf.cast(dist, variance.dtype), variance)
return Loss
def back_propagate(model, X, mask, delta_x, label, direction= False):
with tf.GradientTape() as g:
g.watch(delta_x)
X_batch= Generate_sample(X, delta_x, mask)
feature= tf.cast(model(X_batch), tf.float64)
rdm_feature= RDM_tf(feature)
loss= loss_object(pred= rdm_feature, label= label, delta= delta_x, direction= direction)
# Get the gradients of the loss w.r.t to the input image.
gradient = g.gradient(loss, delta_x)
return gradient, tf.reduce_mean(loss).numpy()
# Tf Variables
X= tf.Variable(X, dtype=tf.float64)
delta_x= tf.Variable(delta_x, dtype=tf.float64)
mask= tf.Variable(mask, dtype=tf.float64)
Xc= tf.Variable(Xc)
Targetc= tf.Variable(Targetc)
###Output
_____no_output_____
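###Markdown
The helpers `Euclidean_dist`, `Sample_variance` and `Generate_sample` used above are defined in earlier cells of the notebook and are not shown here. Purely as hypothetical sketches of what the first two might look like (assumptions, not the notebook's actual definitions):
###Code
def Euclidean_dist_sketch(pred, label):
    # per-sample euclidean distance between predicted and target feature vectors
    return tf.math.sqrt(tf.math.reduce_sum((pred - label) ** 2, axis=-1))

def Sample_variance_sketch(delta):
    # variance of the perturbation pixels, used as a smoothness penalty on delta_x
    return tf.math.reduce_mean(tf.math.reduce_variance(delta, axis=0))
###Output
_____no_output_____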
###Markdown
Modify mask wrt Adversarial Batch
###Code
HIS= {}
HIS['adv_cen_loss']=[]
HIS['target_cen_loss']=[]
epoch= 300
Lambda= 0.6
for ep in range(epoch):
grad, loss= back_propagate(model, X, mask, delta_x, Xc)
HIS['adv_cen_loss'].append(loss)
# Gradient step
delta_x= delta_x - Lambda*grad
if ep%10 == 0:
print('Epoch: {} Loss: {:.3f}'.format((ep+1), loss))
###Output
Epoch: 1 Loss: 1282.865
Epoch: 11 Loss: 1253.664
Epoch: 21 Loss: 1226.173
Epoch: 31 Loss: 1200.859
Epoch: 41 Loss: 1178.377
Epoch: 51 Loss: 1159.589
Epoch: 61 Loss: 1145.446
Epoch: 71 Loss: 1136.291
Epoch: 81 Loss: 1130.966
Epoch: 91 Loss: 1127.865
Epoch: 101 Loss: 1126.035
Epoch: 111 Loss: 1124.951
Epoch: 121 Loss: 1124.305
Epoch: 131 Loss: 1123.927
Epoch: 141 Loss: 1125.602
Epoch: 151 Loss: 1125.664
Epoch: 161 Loss: 1125.689
Epoch: 171 Loss: 1125.702
Epoch: 181 Loss: 1125.709
Epoch: 191 Loss: 1125.713
Epoch: 201 Loss: 1125.715
Epoch: 211 Loss: 1125.717
Epoch: 221 Loss: 1125.719
Epoch: 231 Loss: 1125.720
Epoch: 241 Loss: 1125.720
Epoch: 251 Loss: 1125.721
Epoch: 261 Loss: 1125.721
Epoch: 271 Loss: 1125.722
Epoch: 281 Loss: 1125.722
Epoch: 291 Loss: 1125.722
###Markdown
Modify mask wrt Targets Batch
###Code
Lambda= 1
for ep in range(int(3*epoch)):
grad, loss= back_propagate(model, X, mask, delta_x, Targetc, direction= True)
HIS['target_cen_loss'].append(loss)
# Gradient step
delta_x= delta_x - Lambda*grad
if ep== 0:
delta_x0= tf.identity(delta_x)
if ep== 1:
delta_x1= tf.identity(delta_x)
if ep== 2:
delta_x2= tf.identity(delta_x)
if ep== 3:
delta_x2= tf.identity(delta_x)
if ep== 170:
Lambda= 0.1
if ep== 250:
Lambda= 0.01
if ep== 450:
Lambda= 0.00001
if ep%10 == 0:
print('Epoch: {}, Loss: {}'.format((ep+1), loss))
adv_sample0=Generate_sample(X, delta_x, mask)
adv_sample0=adv_sample0.numpy()
adv_sample1=Generate_sample(X, delta_x1, mask)
adv_sample1=adv_sample1.numpy()
adv_sample1.shape
adv_sample0=np.clip(adv_sample0, 0, 1)
adv_sample1=np.clip(adv_sample1, 0, 1)
f, ax= plt.subplots(1, 5, figsize=(14, 4))
for i in range(5):
ax[i].imshow(adv_sample0[i+5])
ax[i].set_xticks([]); ax[i].set_yticks([])
plt.show()
adv_feature= model.predict(X_ex)
df_adv= pd.DataFrame(adv_feature)
adv_modified_feature0= model.predict(adv_sample0)
df_adv_modify0= pd.DataFrame(adv_modified_feature0)
adv_modified_feature1= model.predict(adv_sample1)
df_adv_modify1= pd.DataFrame(adv_modified_feature1)
target_feature= model.predict(Target_samples)
df_target= pd.DataFrame(target_feature)
df_adv['target']= 'Adversarial_sample'
df_adv_modify0['target']= 'Adversarial Perturbation initial step'
df_adv_modify1['target']= 'Adversarial Perturbation final step'
df_target['target']= 'Target_sample'
df=pd.concat([df_target, df_adv_modify0,df_adv_modify1, df_adv], ignore_index= True)
df.shape
pca = PCA(n_components=2)
# Fit pca to 'X'
df1= pd.DataFrame(pca.fit_transform(df.drop(['target'], 1)))
df1.shape
df1['target']= df.target
fig, ax = plt.subplots(figsize=(12, 6))
plt.grid(True)
plt.xlabel('feature-1'); plt.ylabel('feature-2')
sns.scatterplot(x=df1.iloc[:, 0] , y= df1.iloc[:, 1], hue = df1.iloc[:, 2], data= df1, palette='Set1', ax= ax)
plt.show()
np.save('Benchmark_RDM.npy', HIS)
###Output
_____no_output_____ |
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/Hoodsport/201905_Hindcast/2014_Hoodsport_Evaluations.ipynb | ###Markdown
This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data.
###Code
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2014
modver = "HC201905"
mooring = "Hoodsport"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/201905_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
UTC=[]
for yd in date_list:
if np.isnan(yd) == True:
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd) == True:
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
###Output
_____no_output_____
###Markdown
Map of Buoy Location.
###Code
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
###Output
_____no_output_____
###Markdown
Temperature
###Code
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
                 var_title='Temperature (°C)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
                 var_title='Temperature (°C)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
###Output
/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>
plt.locator_params(axis="x", nbins=20)
###Markdown
Salinity
###Code
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellatets=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
###Output
_____no_output_____ |
mnist_isfive.ipynb | ###Markdown
MNIST: Classification of the symbol 'five'
###Code
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
X, y = mnist['data'], mnist['target']
def show_digit(index):
digit = X[index]
digit_img = digit.reshape(28, 28)
plt.imshow(digit_img, cmap=matplotlib.cm.binary, interpolation='nearest')
plt.axis('off')
plt.show()
show_digit(36000)
# Split up dataset in test and training data
X_train, y_train, X_test, y_test = X[:60000], y[:60000], X[60000:], y[60000:]
# and shuffle training data
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
# Filter out non-fives
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
# Predict the five above
sgd_clf.predict([X[36000]])
# And validate the model
from sklearn.model_selection import cross_val_score
y_train_accuracy = cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy')
# and use a confusion matrix
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function')
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
from sklearn.metrics import precision_score, recall_score, f1_score
print('SGDClassifier accuracy:', y_train_accuracy)
print('SGDClassifier precision:', precision_score(y_train_5, y_train_pred))
print('SGDClassifier recall:', recall_score(y_train_5, y_train_pred))
print('SGDClassifier F1:', f1_score(y_train_5, y_train_pred))
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
plot_roc_curve(fpr, tpr)
plt.show()
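# Extra sketch (not in the original notebook): the decision scores also let us trade
# precision against recall at different thresholds.
from sklearn.metrics import precision_recall_curve
precisions, recalls, pr_thresholds = precision_recall_curve(y_train_5, y_scores)
plt.plot(pr_thresholds, precisions[:-1], label='precision')
plt.plot(pr_thresholds, recalls[:-1], label='recall')
plt.xlabel('Decision threshold')
plt.legend(loc='best')
plt.show()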
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method='predict_proba')
y_scores_forest = y_probas_forest[:, 1]
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
plt.plot(fpr, tpr, 'b:', label='SGD')
plot_roc_curve(fpr_forest, tpr_forest, 'Random Forest')
plt.legend(loc='lower right')
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
print('RandomForestClassifier precision:', precision_score(y_train_5, y_train_pred_forest))
print('RandomForestClassifier recall:', recall_score(y_train_5, y_train_pred_forest))
###Output
RandomForestClassifier precision: 0.982969432314
RandomForestClassifier recall: 0.830474082273
|
wrangle-lesson-api-review.ipynb | ###Markdown
API Review Create the spark session
###Code
import pyspark
spark = pyspark.sql.SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
Create Dataframes
###Code
import pandas as pd
import numpy as np
np.random.seed(123)
pd_df = pd.DataFrame(dict(n=np.arange(20), group=np.random.choice(list("abc"), 20)))
pd_df.head()
###Output
_____no_output_____
###Markdown
Convert to a spark dataframe
###Code
df = spark.createDataFrame(pd_df)
#must run .show() to see the spark dataframe
df.show(2)
df.describe().show()
from pydataset import data
mpg = spark.createDataFrame(data("mpg"))
mpg.show(2)
###Output
+------------+-----+-----+----+---+----------+---+---+---+---+-------+
|manufacturer|model|displ|year|cyl| trans|drv|cty|hwy| fl| class|
+------------+-----+-----+----+---+----------+---+---+---+---+-------+
| audi| a4| 1.8|1999| 4| auto(l5)| f| 18| 29| p|compact|
| audi| a4| 1.8|1999| 4|manual(m5)| f| 21| 29| p|compact|
+------------+-----+-----+----+---+----------+---+---+---+---+-------+
only showing top 2 rows
###Markdown
Create Columns This returns a column object:
###Code
mpg.hwy
###Output
_____no_output_____
###Markdown
To select the values in the column object, we follow it with show. And we can use .select to select multiple column objects.
###Code
# select 3 columns and show 2 rows
mpg.select(mpg.hwy, mpg.cty, mpg.model).show(2)
# select 1 column, then select that column and add one to each of the values, return and show both columns.
mpg.select(mpg.hwy, mpg.hwy + 1).show(2)
# select & alias hwy column name
mpg.select(mpg.hwy.alias("highway_mileage")).show(2)
# create a var col1 to store the column object of hwy, aliased as highway_mileage
col1 = mpg.hwy.alias("highway_mileage")
# create a var col2 to store the column object of hwy divided by 2, aliased as highway_mileage_halved
col2 = (mpg.hwy/2).alias("highway_mileage_halved")
# select both, referencing the new variables, col1 and col2
mpg.select(col1, col2).show(1)
from pyspark.sql.functions import col, expr
col("hwy")
avg_col = (col("hwy") + col("cty")) / 2
mpg.select(
col("hwy").alias("highway_mileage"),
mpg.cty.alias("city_mileage"),
avg_col.alias("avg_mileage")
).show(2)
###Output
+---------------+------------+-----------+
|highway_mileage|city_mileage|avg_mileage|
+---------------+------------+-----------+
| 29| 18| 23.5|
| 29| 21| 25.0|
+---------------+------------+-----------+
only showing top 2 rows
###Markdown
Another way to do what we did above, using expr() ...
###Code
mpg.select(
expr("hwy"), # the same as `col`
expr("hwy + 1"), # an arithmetic expression
expr("hwy AS highway_mileage"), # using an alias
expr("hwy + 1 AS highway_incremented"), # a combination of the above
).show(5)
###Output
+---+---------+---------------+-------------------+
|hwy|(hwy + 1)|highway_mileage|highway_incremented|
+---+---------+---------------+-------------------+
| 29| 30| 29| 30|
| 29| 30| 29| 30|
| 31| 32| 31| 32|
| 30| 31| 30| 31|
| 26| 27| 26| 27|
+---+---------+---------------+-------------------+
only showing top 5 rows
###Markdown
Bringing together all the different ways to accomplish the same task...select a column & alias it.
###Code
mpg.select(
mpg.hwy.alias("highway"),
col("hwy").alias("highway"),
expr("hwy").alias("highway"),
expr("hwy AS highway"),
).show(5)
###Output
+-------+-------+-------+-------+
|highway|highway|highway|highway|
+-------+-------+-------+-------+
| 29| 29| 29| 29|
| 29| 29| 29| 29|
| 31| 31| 31| 31|
| 30| 30| 30| 30|
| 26| 26| 26| 26|
+-------+-------+-------+-------+
only showing top 5 rows
###Markdown
Spark SQL
###Code
# register the table with spark
mpg.createOrReplaceTempView("mpg")
spark.sql(
"""
SELECT hwy, cty, (hwy + cty) / 2 as avg
FROM mpg
"""
).show(2)
###Output
+---+---+----+
|hwy|cty| avg|
+---+---+----+
| 29| 18|23.5|
| 29| 21|25.0|
+---+---+----+
only showing top 2 rows
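###Markdown
Any SQL that Spark supports can be run against the registered view. As a small additional sketch (not part of the original lesson), an aggregation:
###Code
spark.sql(
    """
    SELECT class, ROUND(AVG(hwy), 2) AS avg_hwy
    FROM mpg
    GROUP BY class
    ORDER BY avg_hwy DESC
    """
).show()
###Output
_____no_output_____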
###Markdown
Type Casting
###Code
mpg.dtypes
mpg.printSchema()
mpg.select(mpg.hwy.cast("string")).printSchema()
# shows null because can't be converted.
mpg.select(mpg.model, mpg.model.cast("int")).show(2)
###Output
+-----+-----+
|model|model|
+-----+-----+
| a4| null|
| a4| null|
+-----+-----+
only showing top 2 rows
###Markdown
Built-in Functions
###Code
# avg and mean are aliases of each other
from pyspark.sql.functions import concat, sum, avg, min, max, count, mean
# from pyspark.sql.functions import *
mpg.select(
sum(mpg.hwy) / count(mpg.hwy).alias("average_1"),
avg(mpg.hwy).alias("average_2"),
min(mpg.hwy),
max(mpg.hwy),
).show()
mpg.select(concat(mpg.manufacturer, mpg.model)).show(5)
###Output
+---------------------------+
|concat(manufacturer, model)|
+---------------------------+
| audia4|
| audia4|
| audia4|
| audia4|
| audia4|
+---------------------------+
only showing top 5 rows
###Markdown
The function for string literals: lit
###Code
from pyspark.sql.functions import lit
mpg.select(concat(mpg.cyl, lit(" cylinders"))).show(5)
###Output
+-----------------------+
|concat(cyl, cylinders)|
+-----------------------+
| 4 cylinders|
| 4 cylinders|
| 4 cylinders|
| 4 cylinders|
| 6 cylinders|
+-----------------------+
only showing top 5 rows
###Markdown
More String Manipulation
###Code
from pyspark.sql.functions import regexp_extract, regexp_replace
textdf = spark.createDataFrame(
pd.DataFrame(
{
"address": [
"600 Navarro St ste 600, San Antonio, TX 78205",
"3130 Broadway St, San Antonio, TX 78209",
"303 Pearl Pkwy, San Antonio, TX 78215",
"1255 SW Loop 410, San Antonio, TX 78227",
]
}
)
)
textdf.show(truncate=False)
###Output
+---------------------------------------------+
|address |
+---------------------------------------------+
|600 Navarro St ste 600, San Antonio, TX 78205|
|3130 Broadway St, San Antonio, TX 78209 |
|303 Pearl Pkwy, San Antonio, TX 78215 |
|1255 SW Loop 410, San Antonio, TX 78227 |
+---------------------------------------------+
###Markdown
Using regexp_extract - extract at least one capture group and create a new column from it.
###Code
textdf.select(
"address",
regexp_extract("address", r"^(\d+)", 1).alias("street_no"),
regexp_extract("address", r"^\d+\s([\w\s]+?),", 1).alias("street"),
).show(truncate=False)
###Output
+---------------------------------------------+---------+------------------+
|address |street_no|street |
+---------------------------------------------+---------+------------------+
|600 Navarro St ste 600, San Antonio, TX 78205|600 |Navarro St ste 600|
|3130 Broadway St, San Antonio, TX 78209 |3130 |Broadway St |
|303 Pearl Pkwy, San Antonio, TX 78215 |303 |Pearl Pkwy |
|1255 SW Loop 410, San Antonio, TX 78227 |1255 |SW Loop 410 |
+---------------------------------------------+---------+------------------+
###Markdown
regexp_replace lets us make substitutions based on a regular expression.
###Code
textdf.select(
"address",
regexp_replace("address", r"^.*?,\s*", "").alias("city_state_zip"),
).show(truncate=False)
###Output
+---------------------------------------------+---------------------+
|address |city_state_zip |
+---------------------------------------------+---------------------+
|600 Navarro St ste 600, San Antonio, TX 78205|San Antonio, TX 78205|
|3130 Broadway St, San Antonio, TX 78209 |San Antonio, TX 78209|
|303 Pearl Pkwy, San Antonio, TX 78215 |San Antonio, TX 78215|
|1255 SW Loop 410, San Antonio, TX 78227 |San Antonio, TX 78227|
+---------------------------------------------+---------------------+
###Markdown
Filtering with .filter and .where
###Code
mpg.filter(mpg.cyl == 4).where(mpg["class"] == "subcompact").show()
###Output
+------------+-----------+-----+----+---+----------+---+---+---+---+----------+
|manufacturer| model|displ|year|cyl| trans|drv|cty|hwy| fl| class|
+------------+-----------+-----+----+---+----------+---+---+---+---+----------+
| honda| civic| 1.6|1999| 4|manual(m5)| f| 28| 33| r|subcompact|
| honda| civic| 1.6|1999| 4| auto(l4)| f| 24| 32| r|subcompact|
| honda| civic| 1.6|1999| 4|manual(m5)| f| 25| 32| r|subcompact|
| honda| civic| 1.6|1999| 4|manual(m5)| f| 23| 29| p|subcompact|
| honda| civic| 1.6|1999| 4| auto(l4)| f| 24| 32| r|subcompact|
| honda| civic| 1.8|2008| 4|manual(m5)| f| 26| 34| r|subcompact|
| honda| civic| 1.8|2008| 4| auto(l5)| f| 25| 36| r|subcompact|
| honda| civic| 1.8|2008| 4| auto(l5)| f| 24| 36| c|subcompact|
| honda| civic| 2.0|2008| 4|manual(m6)| f| 21| 29| p|subcompact|
| hyundai| tiburon| 2.0|1999| 4| auto(l4)| f| 19| 26| r|subcompact|
| hyundai| tiburon| 2.0|1999| 4|manual(m5)| f| 19| 29| r|subcompact|
| hyundai| tiburon| 2.0|2008| 4|manual(m5)| f| 20| 28| r|subcompact|
| hyundai| tiburon| 2.0|2008| 4| auto(l4)| f| 20| 27| r|subcompact|
| subaru|impreza awd| 2.2|1999| 4| auto(l4)| 4| 21| 26| r|subcompact|
| subaru|impreza awd| 2.2|1999| 4|manual(m5)| 4| 19| 26| r|subcompact|
| subaru|impreza awd| 2.5|1999| 4|manual(m5)| 4| 19| 26| r|subcompact|
| subaru|impreza awd| 2.5|1999| 4| auto(l4)| 4| 19| 26| r|subcompact|
| volkswagen| new beetle| 1.9|1999| 4|manual(m5)| f| 35| 44| d|subcompact|
| volkswagen| new beetle| 1.9|1999| 4| auto(l4)| f| 29| 41| d|subcompact|
| volkswagen| new beetle| 2.0|1999| 4|manual(m5)| f| 21| 29| r|subcompact|
+------------+-----------+-----+----+---+----------+---+---+---+---+----------+
only showing top 20 rows
###Markdown
Conditionals with When and Otherwise
###Code
from pyspark.sql.functions import when
mpg.select(
mpg.displ,
(
when(mpg.displ < 2, "small")
.when(mpg.displ < 3, "medium")
.otherwise("large")
.alias("engine_size")
),
).show(10)
###Output
+-----+-----------+
|displ|engine_size|
+-----+-----------+
| 1.8| small|
| 1.8| small|
| 2.0| medium|
| 2.0| medium|
| 2.8| medium|
| 2.8| medium|
| 3.1| large|
| 1.8| small|
| 1.8| small|
| 2.0| medium|
+-----+-----------+
only showing top 10 rows
###Markdown
Sorting & Ordering
###Code
mpg.sort(mpg.hwy).show(8)
from pyspark.sql.functions import asc, desc
mpg.sort(mpg.hwy.desc())
# is the same as
mpg.sort(col("hwy").desc())
# is the same as
mpg.sort(desc("hwy")).show(5)
mpg.sort(desc("class"), mpg.cyl.asc(), col("hwy").desc()).show()
###Output
+------------+------------------+-----+----+---+----------+---+---+---+---+-----+
|manufacturer| model|displ|year|cyl| trans|drv|cty|hwy| fl|class|
+------------+------------------+-----+----+---+----------+---+---+---+---+-----+
| subaru| forester awd| 2.5|2008| 4|manual(m5)| 4| 20| 27| r| suv|
| subaru| forester awd| 2.5|2008| 4| auto(l4)| 4| 20| 26| r| suv|
| subaru| forester awd| 2.5|1999| 4|manual(m5)| 4| 18| 25| r| suv|
| subaru| forester awd| 2.5|2008| 4|manual(m5)| 4| 19| 25| p| suv|
| subaru| forester awd| 2.5|1999| 4| auto(l4)| 4| 18| 24| r| suv|
| subaru| forester awd| 2.5|2008| 4| auto(l4)| 4| 18| 23| p| suv|
| toyota| 4runner 4wd| 2.7|1999| 4|manual(m5)| 4| 15| 20| r| suv|
| toyota| 4runner 4wd| 2.7|1999| 4| auto(l4)| 4| 16| 20| r| suv|
| jeep|grand cherokee 4wd| 3.0|2008| 6| auto(l5)| 4| 17| 22| d| suv|
| nissan| pathfinder 4wd| 4.0|2008| 6| auto(l5)| 4| 14| 20| p| suv|
| toyota| 4runner 4wd| 4.0|2008| 6| auto(l5)| 4| 16| 20| r| suv|
| jeep|grand cherokee 4wd| 4.0|1999| 6| auto(l4)| 4| 15| 20| r| suv|
| toyota| 4runner 4wd| 3.4|1999| 6| auto(l4)| 4| 15| 19| r| suv|
| ford| explorer 4wd| 4.0|1999| 6|manual(m5)| 4| 15| 19| r| suv|
| jeep|grand cherokee 4wd| 3.7|2008| 6| auto(l5)| 4| 15| 19| r| suv|
| mercury| mountaineer 4wd| 4.0|2008| 6| auto(l5)| 4| 13| 19| r| suv|
| ford| explorer 4wd| 4.0|2008| 6| auto(l5)| 4| 13| 19| r| suv|
| nissan| pathfinder 4wd| 3.3|1999| 6| auto(l4)| 4| 14| 17| r| suv|
| ford| explorer 4wd| 4.0|1999| 6| auto(l5)| 4| 14| 17| r| suv|
| ford| explorer 4wd| 4.0|1999| 6| auto(l5)| 4| 14| 17| r| suv|
+------------+------------------+-----+----+---+----------+---+---+---+---+-----+
only showing top 20 rows
###Markdown
Grouping & Aggregating
###Code
mpg.groupBy(mpg.cyl)
mpg.groupBy(col("cyl"))
mpg.groupBy("cyl")
mpg.groupBy(mpg.cyl).agg(avg(mpg.cty), avg(mpg.hwy)).show()
mpg.groupBy("cyl", "class").agg(avg(mpg.cty), avg(mpg.hwy)).show()
###Output
+---+----------+------------------+------------------+
|cyl| class| avg(cty)| avg(hwy)|
+---+----------+------------------+------------------+
| 5| compact| 21.0| 29.0|
| 5|subcompact| 20.0| 28.5|
| 6|subcompact| 17.0|24.714285714285715|
| 6| pickup| 14.5| 17.9|
| 4|subcompact|22.857142857142858| 30.80952380952381|
| 8| suv|12.131578947368421|16.789473684210527|
| 8| pickup| 11.8| 15.8|
| 8| midsize| 16.0| 24.0|
| 4| midsize| 20.5| 29.1875|
| 8| 2seater| 15.4| 24.8|
| 6| compact|16.923076923076923|25.307692307692307|
| 6| minivan| 15.6| 22.2|
| 4| compact| 21.375| 29.46875|
| 8|subcompact| 14.8| 21.6|
| 6| midsize|17.782608695652176| 26.26086956521739|
| 4| minivan| 18.0| 24.0|
| 4| pickup| 16.0|20.666666666666668|
| 6| suv| 14.5| 18.5|
| 4| suv| 18.0| 23.75|
+---+----------+------------------+------------------+
###Markdown
Rollup will do the same aggregations, but also include overall totals.
###Code
mpg.rollup("cyl").count().sort("cyl").show()
###Output
+----+-----+
| cyl|count|
+----+-----+
|null| 234|
| 4| 81|
| 5| 4|
| 6| 79|
| 8| 70|
+----+-----+
###Markdown
Here the null value in cyl indicates the total count.
###Code
mpg.rollup("cyl").agg(expr("avg(hwy)")).sort("cyl").show()
###Output
+----+-----------------+
| cyl| avg(hwy)|
+----+-----------------+
|null|23.44017094017094|
| 4|28.80246913580247|
| 5| 28.75|
| 6|22.82278481012658|
| 8|17.62857142857143|
+----+-----------------+
###Markdown
Here the null value in cyl indicates the overall average.
###Code
mpg.rollup("cyl", "class").mean("hwy").sort(col("cyl"), col("class")).show()
###Output
+----+----------+------------------+
| cyl| class| avg(hwy)|
+----+----------+------------------+
|null| null| 23.44017094017094|
| 4| null| 28.80246913580247|
| 4| compact| 29.46875|
| 4| midsize| 29.1875|
| 4| minivan| 24.0|
| 4| pickup|20.666666666666668|
| 4|subcompact| 30.80952380952381|
| 4| suv| 23.75|
| 5| null| 28.75|
| 5| compact| 29.0|
| 5|subcompact| 28.5|
| 6| null| 22.82278481012658|
| 6| compact|25.307692307692307|
| 6| midsize| 26.26086956521739|
| 6| minivan| 22.2|
| 6| pickup| 17.9|
| 6|subcompact|24.714285714285715|
| 6| suv| 18.5|
| 8| null| 17.62857142857143|
| 8| 2seater| 24.8|
+----+----------+------------------+
only showing top 20 rows
###Markdown
Crosstables & Pivot Tables Crosstab is a simple way to get counts.
###Code
mpg.crosstab("class", "cyl").show()
###Output
+----------+---+---+---+---+
| class_cyl| 4| 5| 6| 8|
+----------+---+---+---+---+
| midsize| 16| 0| 23| 2|
|subcompact| 21| 2| 7| 5|
| 2seater| 0| 0| 0| 5|
| pickup| 3| 0| 10| 20|
| minivan| 1| 0| 10| 0|
| suv| 8| 0| 16| 38|
| compact| 32| 2| 13| 0|
+----------+---+---+---+---+
###Markdown
We can use pivot to compute aggregations other than count.
###Code
mpg.groupby("class").pivot("cyl").mean("hwy").show()
###Output
+----------+------------------+----+------------------+------------------+
| class| 4| 5| 6| 8|
+----------+------------------+----+------------------+------------------+
|subcompact| 30.80952380952381|28.5|24.714285714285715| 21.6|
| compact| 29.46875|29.0|25.307692307692307| null|
| minivan| 24.0|null| 22.2| null|
| suv| 23.75|null| 18.5|16.789473684210527|
| midsize| 29.1875|null| 26.26086956521739| 24.0|
| pickup|20.666666666666668|null| 17.9| 15.8|
| 2seater| null|null| null| 24.8|
+----------+------------------+----+------------------+------------------+
###Markdown
Missing Values
###Code
df = spark.createDataFrame(
pd.DataFrame(
{"x": [1, 2, np.nan, 4, 5, np.nan], "y": [np.nan, 0, 0, 3, 1, np.nan]}
)
)
df.show()
df.na.drop().show()
df.na.fill(0).show()
df.na.fill(0, subset="x").show()
df.na.drop(subset="y").show()
###Output
+---+---+
| x| y|
+---+---+
|2.0|0.0|
|NaN|0.0|
|4.0|3.0|
|5.0|1.0|
+---+---+
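###Markdown
The fill value can also be given per column by passing a dictionary. A quick sketch (not part of the original lesson):
###Code
# fill each column with a different value
df.na.fill({"x": 0.0, "y": -1.0}).show()
###Output
_____no_output_____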
###Markdown
Transformations of Dataframes
###Code
# how is spark thinking about our df?
mpg.explain()
###Output
== Physical Plan ==
*(1) Scan ExistingRDD[manufacturer#146,model#147,displ#148,year#149L,cyl#150L,trans#151,drv#152,cty#153L,hwy#154L,fl#155,class#156]
###Markdown
Only a single step above. The plan below shows another step after "Scan ExistingRDD": a "Project" that contains the names of the columns we are looking for.
###Code
mpg.select(mpg.cyl, mpg.hwy).explain()
###Output
== Physical Plan ==
*(1) Project [cyl#150L, hwy#154L]
+- *(1) Scan ExistingRDD[manufacturer#146,model#147,displ#148,year#149L,cyl#150L,trans#151,drv#152,cty#153L,hwy#154L,fl#155,class#156]
###Markdown
And now we are going to do a more advanced select calculation, but this is still just a single step.
###Code
mpg.select(((mpg.cyl + mpg.hwy) / 2).alias("avg_mpg")).explain()
###Output
== Physical Plan ==
*(1) Project [(cast((cyl#150L + hwy#154L) as double) / 2.0) AS avg_mpg#1541]
+- *(1) Scan ExistingRDD[manufacturer#146,model#147,displ#148,year#149L,cyl#150L,trans#151,drv#152,cty#153L,hwy#154L,fl#155,class#156]
###Markdown
Notice that our filter below is also a single step.
###Code
mpg.filter(mpg.cyl == 6).explain()
mpg.select("cyl", "hwy").filter(expr("cyl = 6")).explain()
mpg.filter(expr("cyl = 6")).select("cyl", "hwy").explain()
###Output
== Physical Plan ==
*(1) Project [cyl#150L, hwy#154L]
+- *(1) Filter (isnotnull(cyl#150L) AND (cyl#150L = 6))
+- *(1) Scan ExistingRDD[manufacturer#146,model#147,displ#148,year#149L,cyl#150L,trans#151,drv#152,cty#153L,hwy#154L,fl#155,class#156]
== Physical Plan ==
*(1) Project [cyl#150L, hwy#154L]
+- *(1) Filter (isnotnull(cyl#150L) AND (cyl#150L = 6))
+- *(1) Scan ExistingRDD[manufacturer#146,model#147,displ#148,year#149L,cyl#150L,trans#151,drv#152,cty#153L,hwy#154L,fl#155,class#156]
###Markdown
More DF Manipulations For these examples, we'll be working with a dataset of observations of the weather in Seattle.
###Code
from vega_datasets import data
weather = data.seattle_weather().assign(date=lambda df: df.date.astype(str))
weather = spark.createDataFrame(weather)
weather.show(6)
# print number of rows & columns
print(weather.count(), "rows", len(weather.columns), "columns")
# get the date range of the dataset.
min_date, max_date = weather.select(min("date"), max("date")).first()
min_date, max_date
# compute temp average
weather = weather.withColumn(
"temp_avg", expr("ROUND(temp_min + temp_max) / 2")
).drop("temp_max", "temp_min")
weather.show(6)
###Output
+----------+-------------+----+-------+--------+
| date|precipitation|wind|weather|temp_avg|
+----------+-------------+----+-------+--------+
|2012-01-01| 0.0| 4.7|drizzle| 9.0|
|2012-01-02| 10.9| 4.5| rain| 6.5|
|2012-01-03| 0.8| 2.3| rain| 9.5|
|2012-01-04| 20.3| 4.7| rain| 9.0|
|2012-01-05| 1.3| 6.1| rain| 6.0|
|2012-01-06| 2.5| 2.2| rain| 3.5|
+----------+-------------+----+-------+--------+
only showing top 6 rows
###Markdown
Calculate total rainfall
###Code
from pyspark.sql.functions import month, year, quarter
(
weather.withColumn("month", month("date"))
.groupBy("month")
.agg(sum("precipitation").alias("total_rainfall"))
.sort("month")
.show()
)
###Output
+-----+------------------+
|month| total_rainfall|
+-----+------------------+
| 1|465.99999999999994|
| 2| 422.0|
| 3| 606.2|
| 4| 375.4|
| 5| 207.5|
| 6| 132.9|
| 7| 48.2|
| 8| 163.7|
| 9|235.49999999999997|
| 10| 503.4|
| 11| 642.5|
| 12| 622.7000000000002|
+-----+------------------+
###Markdown
Let's now take a look at the average temperature for each type of weather in December 2013:
###Code
(
weather.filter(month("date") == 12)
.filter(year("date") == 2013)
.groupBy("weather")
.agg(mean("temp_avg"))
.show()
)
###Output
+-------+-----------------+
|weather| avg(temp_avg)|
+-------+-----------------+
| fog|7.555555555555555|
| sun|2.977272727272727|
+-------+-----------------+
###Markdown
Let's now find out how many days had freezing temperatures in each month of 2013.
###Code
(
weather.filter(year("date") == 2013)
.withColumn("freezing_temps", (weather.temp_avg <= 0).cast("int"))
.withColumn("month", month("date"))
.groupBy("month")
.agg(sum("freezing_temps").alias("no_of_days_with_freezing_temps"))
.sort("month")
.show()
)
###Output
+-----+------------------------------+
|month|no_of_days_with_freezing_temps|
+-----+------------------------------+
| 1| 3|
| 2| 0|
| 3| 0|
| 4| 0|
| 5| 0|
| 6| 0|
| 7| 0|
| 8| 0|
| 9| 0|
| 10| 0|
| 11| 0|
| 12| 5|
+-----+------------------------------+
###Markdown
One last example, let's calculate the average temperature for each quarter of each year:
###Code
(
weather.withColumn("quarter", quarter("date"))
.withColumn("year", year("date"))
.groupBy("year", "quarter")
.agg(mean("temp_avg").alias("temp_avg"))
.sort("year", "quarter")
.show()
)
###Output
+----+-------+------------------+
|year|quarter| temp_avg|
+----+-------+------------------+
|2012| 1| 5.587912087912088|
|2012| 2|12.675824175824175|
|2012| 3| 18.375|
|2012| 4| 8.581521739130435|
|2013| 1| 6.405555555555556|
|2013| 2|14.505494505494505|
|2013| 3| 19.47826086956522|
|2013| 4| 8.032608695652174|
|2014| 1| 7.205555555555556|
|2014| 2|14.296703296703297|
|2014| 3|19.858695652173914|
|2014| 4| 9.88586956521739|
|2015| 1| 8.972222222222221|
|2015| 2|15.258241758241759|
|2015| 3|19.407608695652176|
|2015| 4| 8.956521739130435|
+----+-------+------------------+
###Markdown
We could use a pivot table instead:
###Code
(
weather.withColumn("quarter", quarter("date"))
.withColumn("year", year("date"))
.groupBy("quarter")
.pivot("year")
.agg(expr("ROUND(MEAN(temp_avg), 2) AS temp_avg"))
.sort("quarter")
.show()
)
###Output
+-------+-----+-----+-----+-----+
|quarter| 2012| 2013| 2014| 2015|
+-------+-----+-----+-----+-----+
| 1| 5.59| 6.41| 7.21| 8.97|
| 2|12.68|14.51| 14.3|15.26|
| 3|18.38|19.48|19.86|19.41|
| 4| 8.58| 8.03| 9.89| 8.96|
+-------+-----+-----+-----+-----+
###Markdown
Joins We'll start by creating some data that we can join together:
###Code
users = spark.createDataFrame(
pd.DataFrame(
{
"id": [1, 2, 3, 4, 5, 6],
"name": ["bob", "joe", "sally", "adam", "jane", "mike"],
"role_id": [1, 2, 3, 3, np.nan, np.nan],
}
)
)
roles = spark.createDataFrame(
pd.DataFrame(
{
"id": [1, 2, 3, 4],
"name": ["admin", "author", "reviewer", "commenter"],
}
)
)
print("--- users ---")
users.show()
print("--- roles ---")
roles.show()
###Output
--- users ---
+---+-----+-------+
| id| name|role_id|
+---+-----+-------+
| 1| bob| 1.0|
| 2| joe| 2.0|
| 3|sally| 3.0|
| 4| adam| 3.0|
| 5| jane| NaN|
| 6| mike| NaN|
+---+-----+-------+
--- roles ---
+---+---------+
| id| name|
+---+---------+
| 1| admin|
| 2| author|
| 3| reviewer|
| 4|commenter|
+---+---------+
###Markdown
To join two dataframes together, we'll need to call the .join method on one of them and supply the other as an argument. In addition, we'll need to supply the condition on which we are joining. In our case, we are joining where the role_id column on the users table is equal to the id column on the roles table.
###Code
users.join(roles, on=users.role_id == roles.id).show()
###Output
+---+-----+-------+---+--------+
| id| name|role_id| id| name|
+---+-----+-------+---+--------+
| 1| bob| 1.0| 1| admin|
| 3|sally| 3.0| 3|reviewer|
| 4| adam| 3.0| 3|reviewer|
| 2| joe| 2.0| 2| author|
+---+-----+-------+---+--------+
###Markdown
By default, spark will perform an inner join, meaning that records from both dataframes will have a match with the other. We can also specify either a left or a right join, which will keep all of the records from either the left or right side, even if those records don't have a match with the other dataframe.
###Code
users.join(roles, on=users.role_id == roles.id, how="left").show()
users.join(roles, on=users.role_id == roles.id, how="right").show()
###Output
+----+-----+-------+---+---------+
| id| name|role_id| id| name|
+----+-----+-------+---+---------+
| 1| bob| 1.0| 1| admin|
|null| null| null| 4|commenter|
| 3|sally| 3.0| 3| reviewer|
| 4| adam| 3.0| 3| reviewer|
| 2| joe| 2.0| 2| author|
+----+-----+-------+---+---------+
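###Markdown
Both joined results above keep an id column from each dataframe. As a quick hedged sketch of one of the clean-up options discussed next (not part of the original lesson), the right-hand id can be dropped right after joining:
###Code
users.join(roles, on=users.role_id == roles.id).drop(roles.id).show()
###Output
_____no_output_____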
###Markdown
Notice that the examples above have a duplicate id column. There are several ways we could go about dealing with this:
- alias each dataframe + explicitly select columns after joining (this could also be implemented with spark SQL)
- rename duplicated columns before merging
- drop duplicated columns after the merge (.drop(right.id))
Wrangling In this lesson, we will acquire and prepare the data we will use in the rest of this module.
- Acquiring Data
- Data Prep
- Train Test Split
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
spark = SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
Acquisition Spark lets us read data in from a variety of data sources using what it calls a DataFrameReader. We can access the read property of our spark object and then set various options and read from a data source. Using Data Schemas
###Code
df = spark.read.csv("source.csv", sep=",", header=True, inferSchema=True)
df.show(7)
df.printSchema()
# can be done this way too.
from pyspark.sql.types import StructType, StructField, StringType
schema = StructType(
[
StructField("source_id", StringType()),
StructField("source_username", StringType()),
]
)
spark.read.csv("source.csv", header=True, inferSchema=True).show(7)
###Output
+---------+-------------------+
|source_id| source_username|
+---------+-------------------+
| 100137| Merlene Blodgett|
| 103582| Carmen Cura|
| 106463| Richard Sanchez|
| 119403| Betty De Hoyos|
| 119555| Socorro Quiara|
| 119868|Michelle San Miguel|
| 120752| Eva T. Kleiber|
+---------+-------------------+
only showing top 7 rows
###Markdown
Writing Data
###Code
# can write to jso, csv, etc.
from pydataset import data
mpg = spark.createDataFrame(data("mpg"))
# write to Json
mpg.write.json("mpg_json", mode = "overwrite")
# write to csv
mpg.write.csv("mpg_csv", mode = "overwrite")
cases = spark.read.csv("case.csv", header = True, inferSchema = True)
###Output
_____no_output_____
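###Markdown
Spark can also read and write columnar formats such as parquet. A quick sketch (not part of the original lesson):
###Code
# write to parquet and read it back
mpg.write.parquet("mpg_parquet", mode="overwrite")
spark.read.parquet("mpg_parquet").show(2)
###Output
_____no_output_____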
###Markdown
Data Prep
###Code
cases.show(5)
###Output
+----------+----------------+----------------+------------+---------+-------------------+-----------+----------------+--------------------+-----------+-----------+---------+--------------------+----------------+
| case_id|case_opened_date|case_closed_date|SLA_due_date|case_late| num_days_late|case_closed| dept_division|service_request_type| SLA_days|case_status|source_id| request_address|council_district|
+----------+----------------+----------------+------------+---------+-------------------+-----------+----------------+--------------------+-----------+-----------+---------+--------------------+----------------+
|1014127332| 1/1/18 0:42| 1/1/18 12:29|9/26/20 0:42| NO| -998.5087616000001| YES|Field Operations| Stray Animal| 999.0| Closed| svcCRMLS|2315 EL PASO ST,...| 5|
|1014127333| 1/1/18 0:46| 1/3/18 8:11| 1/5/18 8:30| NO|-2.0126041669999997| YES| Storm Water|Removal Of Obstru...|4.322222222| Closed| svcCRMSS|2215 GOLIAD RD, ...| 3|
|1014127334| 1/1/18 0:48| 1/2/18 7:57| 1/5/18 8:30| NO| -3.022337963| YES| Storm Water|Removal Of Obstru...|4.320729167| Closed| svcCRMSS|102 PALFREY ST W...| 3|
|1014127335| 1/1/18 1:29| 1/2/18 8:13|1/17/18 8:30| NO| -15.01148148| YES|Code Enforcement|Front Or Side Yar...|16.29188657| Closed| svcCRMSS|114 LA GARDE ST,...| 3|
|1014127336| 1/1/18 1:34| 1/1/18 13:29| 1/1/18 4:34| YES|0.37216435200000003| YES|Field Operations|Animal Cruelty(Cr...| 0.125| Closed| svcCRMSS|734 CLEARVIEW DR...| 7|
+----------+----------------+----------------+------------+---------+-------------------+-----------+----------------+--------------------+-----------+-----------+---------+--------------------+----------------+
only showing top 5 rows
###Markdown
Column Renaming
###Code
cases = cases.withColumnRenamed("SLA_due_date", "case_due_date")
###Output
_____no_output_____
###Markdown
Data Types
###Code
cases.printSchema()
cases.withColumn("case_closed", expr('case_closed == "YES"')).withColumn("case_late", expr("case_late == 'YES'"))
cases.select("case_closed", "case_late").show(7)
cases.printSchema()
###Output
root
|-- case_id: integer (nullable = true)
|-- case_opened_date: string (nullable = true)
|-- case_closed_date: string (nullable = true)
|-- case_due_date: string (nullable = true)
|-- case_late: string (nullable = true)
|-- num_days_late: double (nullable = true)
|-- case_closed: string (nullable = true)
|-- dept_division: string (nullable = true)
|-- service_request_type: string (nullable = true)
|-- SLA_days: double (nullable = true)
|-- case_status: string (nullable = true)
|-- source_id: string (nullable = true)
|-- request_address: string (nullable = true)
|-- council_district: integer (nullable = true)
###Markdown
Data Transformations
###Code
cases.groupBy('council_district').count().show()
cases = cases.withColumn("council_district", col("council_district").cast("string"))
cases.printSchema()
# Convert datefeild to date data types.
cases.select("case_opened_date", "case_closed_date", "case_due_date").show(7)
fmt = "M/d/yy H:mm"
cases = (
cases.withColumn("case_opened_date", to_timestamp("case_opened_date", fmt))
.withColumn("case_closed_date", to_timestamp("case_closed_date", fmt))
.withColumn("case_due_date", to_timestamp("case_due_date", fmt))
)
cases.select("case_opened_date", "case_closed_date", "case_due_date").show(7)
###Output
+-------------------+-------------------+-------------------+
| case_opened_date| case_closed_date| case_due_date|
+-------------------+-------------------+-------------------+
|2018-01-01 00:42:00|2018-01-01 12:29:00|2020-09-26 00:42:00|
|2018-01-01 00:46:00|2018-01-03 08:11:00|2018-01-05 08:30:00|
|2018-01-01 00:48:00|2018-01-02 07:57:00|2018-01-05 08:30:00|
|2018-01-01 01:29:00|2018-01-02 08:13:00|2018-01-17 08:30:00|
|2018-01-01 01:34:00|2018-01-01 13:29:00|2018-01-01 04:34:00|
|2018-01-01 06:28:00|2018-01-01 14:38:00|2018-01-31 08:30:00|
|2018-01-01 06:57:00|2018-01-02 15:32:00|2018-01-17 08:30:00|
+-------------------+-------------------+-------------------+
only showing top 7 rows
###Markdown
Data Transformations
###Code
cases.select("request_address").show(7)
cases = cases.withColumn("request_address", trim(lower(cases.request_address)))
cases = cases.withColumn("num_weeks_late", expr("num_days_late / 7 AS num_weeks_late"))
cases.select("num_days_late", "num_weeks_late").show(7)
# cases = cases.withColumn("council_district", col("council_district").cast("int"))
# %03d: format as three digits, padding with leading zeros where needed.
cases = cases.withColumn("council_district", format_string("%03d", col("council_district").cast("int")))
cases.select("council_district").show(7)
cases.printSchema()
###Output
root
|-- case_id: integer (nullable = true)
|-- case_opened_date: timestamp (nullable = true)
|-- case_closed_date: timestamp (nullable = true)
|-- case_due_date: timestamp (nullable = true)
|-- case_late: string (nullable = true)
|-- num_days_late: double (nullable = true)
|-- case_closed: string (nullable = true)
|-- dept_division: string (nullable = true)
|-- service_request_type: string (nullable = true)
|-- SLA_days: double (nullable = true)
|-- case_status: string (nullable = true)
|-- source_id: string (nullable = true)
|-- request_address: string (nullable = true)
|-- council_district: string (nullable = false)
|-- num_weeks_late: double (nullable = true)
###Markdown
New Features
###Code
cases = cases.withColumn("zipcode", regexp_extract("request_address", r"\d+$", 0))
cases.select("zipcode").show(7)
cases = (
cases.withColumn("case_age", datediff(current_timestamp(), "case_opened_date"))
.withColumn("days_to_close", datediff("case_closed_date", "case_opened_date"))
)
cases.select("case_age", "days_to_close").show(7)
###Output
+--------+-------------+
|case_age|days_to_close|
+--------+-------------+
| 1229| 0|
| 1229| 2|
| 1229| 1|
| 1229| 1|
| 1229| 0|
| 1229| 0|
| 1229| 1|
+--------+-------------+
only showing top 7 rows
###Markdown
Joining New Dataset
###Code
dept = spark.read.csv('dept.csv', header = True, inferSchema = True)
dept.show(7)
dept.groupBy("dept_division").count().show(truncate = False)
# Take a look at the unique values in each column
cases.groupBy("dept_division").count().show(truncate = False)
# You can also do this
cases.groupBy("dept_division").count().show() == dept.groupBy("dept_division").count().show()
# To join
cases = (
cases.join(dept, "dept_division", "left")
.drop(dept.dept_division)
.drop(dept.dept_name)
.drop(cases.dept_division)
.withColumnRenamed("standardized_dept_name", "dept")
.withColumn("dept_subject_to_SLA", col("dept_subject_to_SLA") == "YES")
)
cases.show(2, vertical = True)
###Output
-RECORD 0------------------------------------
case_id | 1014127332
case_opened_date | 2018-01-01 00:42:00
case_closed_date | 2018-01-01 12:29:00
case_due_date | 2020-09-26 00:42:00
case_late | NO
num_days_late | -998.5087616000001
case_closed | YES
service_request_type | Stray Animal
SLA_days | 999.0
case_status | Closed
source_id | svcCRMLS
request_address | 2315 el paso st,...
council_district | 005
num_weeks_late | -142.6441088
zipcode | 78207
case_age | 1229
days_to_close | 0
dept | Animal Care Services
dept_subject_to_SLA | true
-RECORD 1------------------------------------
case_id | 1014127333
case_opened_date | 2018-01-01 00:46:00
case_closed_date | 2018-01-03 08:11:00
case_due_date | 2018-01-05 08:30:00
case_late | NO
num_days_late | -2.0126041669999997
case_closed | YES
service_request_type | Removal Of Obstru...
SLA_days | 4.322222222
case_status | Closed
source_id | svcCRMSS
request_address | 2215 goliad rd, ...
council_district | 003
num_weeks_late | -0.28751488099999994
zipcode | 78223
case_age | 1229
days_to_close | 2
dept | Trans & Cap Impro...
dept_subject_to_SLA | true
only showing top 2 rows
###Markdown
Data Splitting
###Code
train, validate, test = cases.randomSplit([.7, .2, .1])
train.count()
validate.count()
test.count()
###Output
_____no_output_____ |
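###Markdown
Passing a seed to randomSplit makes the split reproducible across runs. A small sketch (not part of the original lesson):
###Code
train, validate, test = cases.randomSplit([.7, .2, .1], seed=123)
train.count(), validate.count(), test.count()
###Output
_____no_output_____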
Python for Finance - Code Files/109 Monte Carlo - Euler Discretization - Part I/Online Financial Data (APIs)/Python 3 APIs/MC - Euler Discretization - Part I - Solution_IEX.ipynb | ###Markdown
Monte Carlo - Euler Discretization - Part I *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Download the data for Microsoft ('MSFT') from IEX for the period '2015-1-1' until '2017-3-21'.
###Code
import numpy as np
import pandas as pd
from pandas_datareader import data as web
from scipy.stats import norm
import matplotlib.pyplot as plt
%matplotlib inline
ticker = 'MSFT'
data = pd.DataFrame()
data[ticker] = web.DataReader(ticker, data_source='iex', start='2015-1-1', end='2017-3-21')['close']
###Output
5y
###Markdown
Store the annual standard deviation of the log returns in a variable, called 'stdev'.
###Code
log_returns = np.log(1 + data.pct_change())
log_returns.tail()
data.plot(figsize=(10, 6));
stdev = log_returns.std() * 250 ** 0.5
stdev
###Output
_____no_output_____
###Markdown
Set the risk free rate, r, equal to 2.5% (0.025).
###Code
r = 0.025
###Output
_____no_output_____
###Markdown
To transform the object into an array, reassign stdev.values to stdev.
###Code
type(stdev)
stdev = stdev.values
stdev
###Output
_____no_output_____
###Markdown
Set the time horizon, T, equal to 1 year, the number of time intervals equal to 250, the iterations equal to 10,000. Create a variable, delta_t, equal to the quotient of T divided by the number of time intervals.
###Code
T = 1.0
t_intervals = 250
delta_t = T / t_intervals
iterations = 10000
###Output
_____no_output_____
###Markdown
Let Z equal a random matrix with dimension (time intervals + 1) by the number of iterations.
###Code
Z = np.random.standard_normal((t_intervals + 1, iterations))
###Output
_____no_output_____
###Markdown
Use the .zeros_like() method to create another variable, S, with the same dimension as Z. S is the matrix to be filled with future stock price data.
###Code
S = np.zeros_like(Z)
###Output
_____no_output_____
###Markdown
Create a variable S0 equal to the last adjusted closing price of Microsoft. Use the "iloc" method.
###Code
S0 = data.iloc[-1]
S[0] = S0
###Output
_____no_output_____
###Markdown
Use the following formula to create a loop within the range (1, t_intervals + 1) that reassigns values to S in time t. $$S_t = S_{t-1} \cdot exp((r - 0.5 \cdot stdev^2) \cdot delta_t + stdev \cdot delta_t^{0.5} \cdot Z_t)$$
###Code
for t in range(1, t_intervals + 1):
S[t] = S[t-1] * np.exp((r - 0.5 * stdev ** 2) * delta_t + stdev * delta_t ** 0.5 * Z[t])
S
S.shape
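# Sanity check (added): under the risk-neutral drift r, the average simulated terminal price
# should be close to S0 * exp(r * T), up to Monte Carlo error.
print(S[-1].mean(), (S0 * np.exp(r * T)).values)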
###Output
_____no_output_____
###Markdown
Plot the first 10 of the 10,000 generated iterations on a graph.
###Code
plt.figure(figsize=(10, 6))
plt.plot(S[:, :10]);
###Output
_____no_output_____ |
examples/example_inverted_pendulum_kalman.ipynb | ###Markdown
System dynamics The system to be controlled is an inverted pendulum on a cart (see next Figure). The system is governed by the following differential equations:\begin{equation} \begin{aligned} (M+m)\ddot p + ml\ddot\phi \cos\phi - ml \dot \phi ^2 \sin \phi + b\dot p &= F \\ l \ddot \phi + \ddot p \cos \phi - g \sin\phi &= -f_\phi\dot \phi\end{aligned}\end{equation}Introducing the state vector $x=[p\; \dot p\; \phi\; \dot \phi]$ and the input $u=F$, the system dynamics are described in state-space by a set of nonlinear ordinary differential equations: $\dot x = f(x,u)$ with\begin{equation}\begin{split} f(x,u) &= \begin{bmatrix} x_2\\ \frac{-mg \sin x_3\cos x_3 + mlx_4^2\sin x_3 + f_\phi m x_4 \cos x_3 - bx_2 + u }{M+(1-\cos^2 x_3)m}\\ x_4\\ \frac{(M+m)(g \sin x_3 - f_\phi x_4) - (lm x_4^2 \sin x_3 - bx_2 + u)\cos x_3}{l(M+(1-\cos^2 x_3)m)} \end{bmatrix}\\ \end{split} \end{equation}For MPC control design, the system is linearized about the upright (unstable) equilibrium point, i.e., about the point $x_{eq} = [0, \; 0\;, 0,\; 0]^\top$. The linearized system has the form $\dot x = A_c x + B_c u$ with\begin{equation} A_c = \begin{bmatrix} 0& 1& 0& 0\\ 0& -\frac{b}{M}& -g\frac{m}{M}& f_\phi\frac{m}{M}\\ 0&0&0&1\\ 0&\frac{b}{Ml}& \frac{g(M+m)}{Ml}&-\frac{(M+m)f_\phi}{M l} \end{bmatrix},\qquad B_c= \begin{bmatrix} 0\\ \frac{1}{M}\\ 0\\ -\frac{1}{Ml} \end{bmatrix} \end{equation} Next, the system is discretized with sampling time $T_s = 10\;\text{ms}$. Here we just use a forward Euler discretization scheme for the sake of simplicity.
###Code
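# Imports (added): the cells below rely on these modules; they were presumably imported in an
# earlier cell of the original notebook. The pyMPC module paths are an assumption -- adjust them
# to wherever MPCController and the Kalman utilities live in your copy of the example code.
import time
import numpy as np
import scipy.sparse as sparse
import matplotlib.pyplot as plt
from scipy.integrate import ode
from pyMPC.mpc import MPCController  # assumed module path
from pyMPC.kalman import kalman_design_simple, LinearStateEstimator  # assumed module path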
# Constants #
M = 0.5       # cart mass (kg)
m = 0.2       # pendulum mass (kg)
b = 0.1       # friction coefficient on the cart velocity
ftheta = 0.1  # friction coefficient on the pendulum rotation
l = 0.3       # pendulum length (m)
g = 9.81      # gravitational acceleration (m/s^2)
Ts = 10e-3    # sampling time (s)
# System dynamics: \dot x = f_ODE(t,x,u)
def f_ODE(t,x,u):
F = u
v = x[1]
theta = x[2]
omega = x[3]
der = np.zeros(4)
der[0] = v
der[1] = (m * l * np.sin(theta) * omega ** 2 - m * g * np.sin(theta) * np.cos(theta) + m * ftheta * np.cos(theta) * omega + F - b * v) / (M + m * (1 - np.cos(theta) ** 2))
der[2] = omega
der[3] = ((M + m) * (g * np.sin(theta) - ftheta * omega) - m * l * omega ** 2 * np.sin(theta) * np.cos(theta) - (F - b * v) * np.cos(theta)) / (l * (M + m * (1 - np.cos(theta) ** 2)))
return der
# Linearized System Matrices
Ac =np.array([[0, 1, 0, 0],
[0, -b / M, -(g * m) / M, (ftheta * m) / M],
[0, 0, 0, 1],
[0, b / (M * l), (M * g + g * m) / (M * l), -(M * ftheta + ftheta * m) / (M * l)]])
Bc = np.array([
[0.0],
[1.0 / M],
[0.0],
[-1 / (M * l)]
])
Cc = np.array([[1., 0., 0., 0.],
[0., 0., 1., 0.]])
Dc = np.zeros((2, 1))
[nx, nu] = Bc.shape # number of states and number of inputs
ny = np.shape(Cc)[0]
# Simple forward euler discretization
Ad = np.eye(nx) + Ac * Ts
Bd = Bc * Ts
Cd = Cc
Dd = Dc
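# Alternative (added, optional): an exact zero-order-hold discretization could be used instead of
# forward Euler; kept here under separate names so the matrices above are left unchanged.
from scipy.signal import cont2discrete
Ad_zoh, Bd_zoh, Cd_zoh, Dd_zoh, _ = cont2discrete((Ac, Bc, Cc, Dc), Ts, method='zoh')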
# Standard deviation of the measurement noise on position and angle
std_npos = 0.005
std_nphi = 0.005
# Reference input and states
xref = np.array([0.3, 0.0, 0.0, 0.0]) # reference state
uref = np.array([0.0]) # reference input
uminus1 = np.array([0.0]) # input at time step negative one - used to penalize the first delta u at time instant 0. Could be the same as uref.
# Constraints
xmin = np.array([-10.0, -10.0, -100, -100])
xmax = np.array([10.0, 10.0, 100, 100])
umin = np.array([-20])
umax = np.array([20])
Dumin = np.array([-5])
Dumax = np.array([5])
# Objective function weights
Qx = sparse.diags([1.0, 0, 5.0, 0]) # Quadratic cost for states x0, x1, ..., x_N-1
QxN = sparse.diags([1.0, 0, 5.0, 0]) # Quadratic cost for xN
Qu = 0.0 * sparse.eye(1) # Quadratic cost for u0, u1, ...., u_N-1
QDu = 0.1 * sparse.eye(1) # Quadratic cost for Du0, Du1, ...., Du_N-1
# Initialize simulation system
phi0 = 15*2*np.pi/360
x0 = np.array([0, 0, phi0, 0]) # initial state
t0 = 0
system_dyn = ode(f_ODE).set_integrator('vode', method='bdf')
system_dyn.set_initial_value(x0, t0)
_ = system_dyn.set_f_params(0.0)
# Prediction and control horizons (in time steps)
Np = 150  # prediction horizon
Nc = 75   # control horizon
# Instantiate and initialize MPC controller
K = MPCController(Ad, Bd, Np=Np, Nc=Nc, x0=x0, xref=xref, uminus1=uminus1,
Qx=Qx, QxN=QxN, Qu=Qu, QDu=QDu,
xmin=xmin, xmax=xmax, umin=umin, umax=umax, Dumin=Dumin, Dumax=Dumax)
K.setup()
# Basic Kalman filter design
Q_kal = np.diag([0.1, 10, 0.1, 10])
R_kal = np.eye(ny)
L,P,W = kalman_design_simple(Ad, Bd, Cd, Dd, Q_kal, R_kal, type='filter')
x0_est = x0
KF = LinearStateEstimator(x0_est, Ad, Bd, Cd, Dd,L)
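# Note (added): Q_kal and R_kal encode the usual process- vs measurement-noise trade-off:
# increasing R_kal relative to Q_kal makes the filter lean on the model rather than on the noisy
# position/angle measurements. The values above are tuning choices, not identified covariances.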
# Simulate in closed loop
[nx, nu] = Bd.shape # number of states and number of inputs
len_sim = 10 # simulation length (s)
nsim = int(len_sim / Ts) # simulation length(timesteps)
x_vec = np.zeros((nsim, nx))
y_vec = np.zeros((nsim, ny))
y_meas_vec = np.zeros((nsim, ny))
y_est_vec = np.zeros((nsim, ny))
x_est_vec = np.zeros((nsim, nx))
x_ref_vec = np.zeros((nsim, nx))
u_vec = np.zeros((nsim, nu))
t_MPC_CPU = np.zeros((nsim,1))
t_vec = np.arange(0, nsim) * Ts
time_start = time.time()
x_step = x0
x_step_est = x0
t_step = t0
uMPC = uminus1
for i in range(nsim):
# Output for step i
# System
y_step = Cd.dot(system_dyn.y) # y[i] from the system
ymeas_step = y_step.copy() # copy so that adding measurement noise below does not also modify y_step
ymeas_step[0] += std_npos * np.random.randn()
ymeas_step[1] += std_nphi * np.random.randn()
# Estimator
# MPC
uMPC = K.output() # u[i] = k(\hat x[i]) possibly computed at time instant -1
# Save output for step i
y_vec[i, :] = y_step # y[i]
y_meas_vec[i, :] = ymeas_step # y_meas[i]
x_vec[i, :] = system_dyn.y # x[i]
y_est_vec[i, :] = KF.y # \hat y[i|i-1]
x_est_vec[i, :] = KF.x # \hat x[i|i-1]
x_ref_vec[i, :] = xref #xref_fun(t_step)
u_vec[i, :] = uMPC # u[i]
# Update to i+1
# System
system_dyn.set_f_params(uMPC) # set current input value to uMPC
system_dyn.integrate(system_dyn.t + Ts) # integrate system dynamics for a time step
# Kalman filter: update and predict
KF.update(ymeas_step) # \hat x[i|i]
KF.predict(uMPC) # \hat x[i+1|i]
# MPC update for step i+1
time_MPC_start = time.time()
K.update(KF.x, uMPC) # update with measurement (and possibly pre-compute u[i+1])
t_MPC_CPU[i] = time.time() - time_MPC_start
# Time update
t_step += Ts
time_sim = time.time() - time_start
# Plot results
fig, axes = plt.subplots(3, 1, figsize=(10, 10), sharex=True)
axes[0].plot(t_vec, x_est_vec[:, 0], "b", label="p_est")
axes[0].plot(t_vec, x_vec[:, 0], "k", label='p')
axes[0].plot(t_vec, x_ref_vec[:,0], "r--", linewidth=4, label="p_ref")
axes[0].set_ylabel("Position (m)")
axes[1].plot(t_vec, x_est_vec[:, 2] * 360 / 2 / np.pi, "b", label="phi_est")
axes[1].plot(t_vec, x_vec[:, 2] * 360 / 2 / np.pi, label="phi")
axes[1].plot(t_vec, x_ref_vec[:,2] * 360 / 2 / np.pi, "r--", linewidth=4, label="phi_ref")
axes[1].set_ylabel("Angle (deg)")
axes[2].plot(t_vec, u_vec[:, 0], label="u")
axes[2].plot(t_vec, uref * np.ones(np.shape(t_vec)), "r--", linewidth=4, label="u_ref")
axes[2].set_ylabel("Force (N)")
for ax in axes:
ax.grid(True)
ax.legend()
# Histogram of the MPC CPU time
fig,ax = plt.subplots(1,1, figsize=(5,5))
ax.hist(t_MPC_CPU*1000, bins=100)
ax.grid(True)
_ = ax.set_xlabel('MPC computation CPU time (ms)')
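# Summary statistics (added): complement the histogram with the mean and worst-case CPU time per MPC update.
print(f"MPC update CPU time: mean = {1000 * np.mean(t_MPC_CPU):.2f} ms, max = {1000 * np.max(t_MPC_CPU):.2f} ms")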
###Output
_____no_output_____ |