Dataset columns: markdown (string, 0–1.02M chars), code (string, 0–832k chars), output (string, 0–1.02M chars), license (string, 3–36 chars), path (string, 6–265 chars), repo_name (string, 6–127 chars).
# Machine Learning Pipeline - Feature Engineering

In the following notebooks, we will go through the implementation of each step in the Machine Learning Pipeline. We will discuss:

1. Data Analysis
2. **Feature Engineering**
3. Feature Selection
4. Model Training
5. Obtaining Predictions / Scoring

We will use the house price dataset available on [Kaggle.com](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). See below for more details.

## Predicting the Sale Price of Houses

The aim of the project is to build a machine learning model that predicts the sale price of homes based on different explanatory variables describing aspects of residential houses.

### Why is this important?

Predicting house prices is useful to identify fruitful investments, or to determine whether the price advertised for a house is over- or under-estimated.

### What is the objective of the machine learning model?

We aim to minimise the difference between the real price and the price estimated by our model. We will evaluate model performance with:

1. mean squared error (mse)
2. root mean squared error (rmse)
3. r-squared (r2)

### How do I download the dataset?

- Visit the [Kaggle Website](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data).
- Remember to **log in**.
- Scroll down to the bottom of the page, click on the link **'train.csv'**, and then click the blue 'download' button towards the right of the screen to download the dataset.
- Save the downloaded file in the directory with the notebooks.

**Note the following:**

- You need to be logged in to Kaggle in order to download the datasets.
- You need to accept the terms and conditions of the competition to download the dataset.
- If you save the file to the directory with the Jupyter notebooks, then you can run the code as it is written here.

## Reproducibility: Setting the seed

To ensure reproducibility between runs of the same notebook, and also between the research and production environments, it is extremely important that we **set the seed** for every step that includes an element of randomness.
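As a quick illustration of the evaluation metrics listed above, here is a minimal sketch of how they could be computed with scikit-learn. The arrays are made-up placeholder values, not the house price data; the real evaluation happens later in the pipeline.

```python
# Illustrative sketch only: computing mse, rmse and r2 with scikit-learn
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([200000, 150000, 320000])   # placeholder "real" prices
y_pred = np.array([210000, 140000, 300000])   # placeholder model estimates

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)            # rmse is simply the square root of mse
r2 = r2_score(y_true, y_pred)

print(mse, rmse, r2)
```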
# to handle datasets
import pandas as pd
import numpy as np

# for plotting
import matplotlib.pyplot as plt

# for the yeo-johnson transformation
import scipy.stats as stats

# to divide train and test set
from sklearn.model_selection import train_test_split

# feature scaling
from sklearn.preprocessing import MinMaxScaler

# to save the trained scaler class
import joblib

# to visualise all the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)

# load dataset
data = pd.read_csv('train.csv')

# rows and columns of the data
print(data.shape)

# visualise the dataset
data.head()
(1460, 81)
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Separate dataset into train and test

It is important to separate our data into training and testing sets.

When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set, to avoid over-fitting.

Our feature engineering techniques will learn the following from the train set:

- mean
- mode
- exponents for the yeo-johnson transformation
- category frequency
- and category to number mappings

**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)

X_train, X_test, y_train, y_test = train_test_split(
    data.drop(['Id', 'SalePrice'], axis=1),  # predictive variables
    data['SalePrice'],  # target
    test_size=0.1,  # portion of dataset to allocate to test set
    random_state=0,  # we are setting the seed here
)

X_train.shape, X_test.shape
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Feature Engineering

In the following cells, we will engineer the variables of the House Price Dataset so that we tackle:

1. Missing values
2. Temporal variables
3. Non-Gaussian distributed variables
4. Categorical variables: remove rare labels
5. Categorical variables: convert strings to numbers
6. Put the variables on a similar scale

### Target

We apply the logarithm to the target.
y_train = np.log(y_train)
y_test = np.log(y_test)
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Missing values

### Categorical variables

We will replace missing values with the string "Missing" in those variables with a lot of missing data. For variables with fewer observations without values, we will instead replace missing data with the most frequent category. This is common practice.
# let's identify the categorical variables # we will capture those of type object cat_vars = [var for var in data.columns if data[var].dtype == 'O'] # MSSubClass is also categorical by definition, despite its numeric values # (you can find the definitions of the variables in the data_description.txt # file available on Kaggle, in the same website where you downloaded the data) # lets add MSSubClass to the list of categorical variables cat_vars = cat_vars + ['MSSubClass'] # cast all variables as categorical X_train[cat_vars] = X_train[cat_vars].astype('O') X_test[cat_vars] = X_test[cat_vars].astype('O') # number of categorical variables len(cat_vars) # make a list of the categorical variables that contain missing values cat_vars_with_na = [ var for var in cat_vars if X_train[var].isnull().sum() > 0 ] # print percentage of missing values per variable X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False) # variables to impute with the string missing with_string_missing = [ var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1] # variables to impute with the most frequent category with_frequent_category = [ var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1] with_string_missing # replace missing values with new label: "Missing" X_train[with_string_missing] = X_train[with_string_missing].fillna('Missing') X_test[with_string_missing] = X_test[with_string_missing].fillna('Missing') for var in with_frequent_category: # there can be more than 1 mode in a variable # we take the first one with [0] mode = X_train[var].mode()[0] print(var, mode) X_train[var].fillna(mode, inplace=True) X_test[var].fillna(mode, inplace=True) # check that we have no missing information in the engineered variables X_train[cat_vars_with_na].isnull().sum() # check that test set does not contain null values in the engineered variables [var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
### Numerical variables

To engineer missing values in numerical variables, we will:

- add a binary missing indicator variable
- and then replace the missing values in the original variable with the mean
# now let's identify the numerical variables num_vars = [ var for var in X_train.columns if var not in cat_vars and var != 'SalePrice' ] # number of numerical variables len(num_vars) # make a list with the numerical variables that contain missing values vars_with_na = [ var for var in num_vars if X_train[var].isnull().sum() > 0 ] # print percentage of missing values per variable X_train[vars_with_na].isnull().mean() # replace missing values as we described above for var in vars_with_na: # calculate the mean using the train set mean_val = X_train[var].mean() print(var, mean_val) # add binary missing indicator (in train and test) X_train[var + '_na'] = np.where(X_train[var].isnull(), 1, 0) X_test[var + '_na'] = np.where(X_test[var].isnull(), 1, 0) # replace missing values by the mean # (in train and test) X_train[var].fillna(mean_val, inplace=True) X_test[var].fillna(mean_val, inplace=True) # check that we have no more missing values in the engineered variables X_train[vars_with_na].isnull().sum() # check that test set does not contain null values in the engineered variables [var for var in vars_with_na if X_test[var].isnull().sum() > 0] # check the binary missing indicator variables X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Temporal variables

### Capture elapsed time

We learned in the previous notebook that there are 4 variables that refer to the years in which the house or the garage were built or remodeled. We will capture the time elapsed between those variables and the year in which the house was sold:
def elapsed_years(df, var):
    # capture difference between the year variable
    # and the year in which the house was sold
    df[var] = df['YrSold'] - df[var]
    return df


for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
    X_train = elapsed_years(X_train, var)
    X_test = elapsed_years(X_test, var)

# now we drop YrSold
X_train.drop(['YrSold'], axis=1, inplace=True)
X_test.drop(['YrSold'], axis=1, inplace=True)
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Numerical variable transformation

### Logarithmic transformation

In the previous notebook, we observed that the numerical variables are not normally distributed. We will apply the logarithm to the positive numerical variables in order to obtain a more Gaussian-like distribution.
for var in ["LotFrontage", "1stFlrSF", "GrLivArea"]: X_train[var] = np.log(X_train[var]) X_test[var] = np.log(X_test[var]) # check that test set does not contain null values in the engineered variables [var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0] # same for train set [var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
### Yeo-Johnson transformation

We will apply the Yeo-Johnson transformation to LotArea.
# the yeo-johnson transformation learns the best exponent to transform the variable
# it needs to learn it from the train set:
X_train['LotArea'], param = stats.yeojohnson(X_train['LotArea'])

# and then apply the transformation to the test set with the same
# parameter: see how this time we pass param as argument to the
# yeo-johnson
X_test['LotArea'] = stats.yeojohnson(X_test['LotArea'], lmbda=param)

print(param)

# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]

# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
### Binarize skewed variables

A few variables are very skewed; we will transform those into binary variables.
skewed = [
    'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
    '3SsnPorch', 'ScreenPorch', 'MiscVal'
]

for var in skewed:
    # map the variable values into 0 and 1
    X_train[var] = np.where(X_train[var] == 0, 0, 1)
    X_test[var] = np.where(X_test[var] == 0, 0, 1)
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
## Categorical variables

### Apply mappings

These are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
# re-map strings to numbers, which determine quality qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0} qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond', 'HeatingQC', 'KitchenQual', 'FireplaceQu', 'GarageQual', 'GarageCond', ] for var in qual_vars: X_train[var] = X_train[var].map(qual_mappings) X_test[var] = X_test[var].map(qual_mappings) exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4} var = 'BsmtExposure' X_train[var] = X_train[var].map(exposure_mappings) X_test[var] = X_test[var].map(exposure_mappings) finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6} finish_vars = ['BsmtFinType1', 'BsmtFinType2'] for var in finish_vars: X_train[var] = X_train[var].map(finish_mappings) X_test[var] = X_test[var].map(finish_mappings) garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3} var = 'GarageFinish' X_train[var] = X_train[var].map(garage_mappings) X_test[var] = X_test[var].map(garage_mappings) fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4} var = 'Fence' X_train[var] = X_train[var].map(fence_mappings) X_test[var] = X_test[var].map(fence_mappings) # check absence of na in the train set [var for var in X_train.columns if X_train[var].isnull().sum() > 0]
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
### Removing Rare Labels

For the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses (roughly 13 houses in our train set of 1,314 rows) will be replaced by the string "Rare".

To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
# capture all quality variables qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence'] # capture the remaining categorical variables # (those that we did not re-map) cat_others = [ var for var in cat_vars if var not in qual_vars ] len(cat_others) def find_frequent_labels(df, var, rare_perc): # function finds the labels that are shared by more than # a certain % of the houses in the dataset df = df.copy() tmp = df.groupby(var)[var].count() / len(df) return tmp[tmp > rare_perc].index for var in cat_others: # find the frequent categories frequent_ls = find_frequent_labels(X_train, var, 0.01) print(var, frequent_ls) print() # replace rare categories by the string "Rare" X_train[var] = np.where(X_train[var].isin( frequent_ls), X_train[var], 'Rare') X_test[var] = np.where(X_test[var].isin( frequent_ls), X_test[var], 'Rare')
MSZoning Index(['FV', 'RH', 'RL', 'RM'], dtype='object', name='MSZoning') Street Index(['Pave'], dtype='object', name='Street') Alley Index(['Grvl', 'Missing', 'Pave'], dtype='object', name='Alley') LotShape Index(['IR1', 'IR2', 'Reg'], dtype='object', name='LotShape') LandContour Index(['Bnk', 'HLS', 'Low', 'Lvl'], dtype='object', name='LandContour') Utilities Index(['AllPub'], dtype='object', name='Utilities') LotConfig Index(['Corner', 'CulDSac', 'FR2', 'Inside'], dtype='object', name='LotConfig') LandSlope Index(['Gtl', 'Mod'], dtype='object', name='LandSlope') Neighborhood Index(['Blmngtn', 'BrDale', 'BrkSide', 'ClearCr', 'CollgCr', 'Crawfor', 'Edwards', 'Gilbert', 'IDOTRR', 'MeadowV', 'Mitchel', 'NAmes', 'NWAmes', 'NoRidge', 'NridgHt', 'OldTown', 'SWISU', 'Sawyer', 'SawyerW', 'Somerst', 'StoneBr', 'Timber'], dtype='object', name='Neighborhood') Condition1 Index(['Artery', 'Feedr', 'Norm', 'PosN', 'RRAn'], dtype='object', name='Condition1') Condition2 Index(['Norm'], dtype='object', name='Condition2') BldgType Index(['1Fam', '2fmCon', 'Duplex', 'Twnhs', 'TwnhsE'], dtype='object', name='BldgType') HouseStyle Index(['1.5Fin', '1Story', '2Story', 'SFoyer', 'SLvl'], dtype='object', name='HouseStyle') RoofStyle Index(['Gable', 'Hip'], dtype='object', name='RoofStyle') RoofMatl Index(['CompShg'], dtype='object', name='RoofMatl') Exterior1st Index(['AsbShng', 'BrkFace', 'CemntBd', 'HdBoard', 'MetalSd', 'Plywood', 'Stucco', 'VinylSd', 'Wd Sdng', 'WdShing'], dtype='object', name='Exterior1st') Exterior2nd Index(['AsbShng', 'BrkFace', 'CmentBd', 'HdBoard', 'MetalSd', 'Plywood', 'Stucco', 'VinylSd', 'Wd Sdng', 'Wd Shng'], dtype='object', name='Exterior2nd') MasVnrType Index(['BrkFace', 'None', 'Stone'], dtype='object', name='MasVnrType') Foundation Index(['BrkTil', 'CBlock', 'PConc', 'Slab'], dtype='object', name='Foundation') Heating Index(['GasA', 'GasW'], dtype='object', name='Heating') CentralAir Index(['N', 'Y'], dtype='object', name='CentralAir') Electrical Index(['FuseA', 'FuseF', 'SBrkr'], dtype='object', name='Electrical') Functional Index(['Min1', 'Min2', 'Mod', 'Typ'], dtype='object', name='Functional') GarageType Index(['Attchd', 'Basment', 'BuiltIn', 'Detchd'], dtype='object', name='GarageType') PavedDrive Index(['N', 'P', 'Y'], dtype='object', name='PavedDrive') PoolQC Index(['Missing'], dtype='object', name='PoolQC') MiscFeature Index(['Missing', 'Shed'], dtype='object', name='MiscFeature') SaleType Index(['COD', 'New', 'WD'], dtype='object', name='SaleType') SaleCondition Index(['Abnorml', 'Family', 'Normal', 'Partial'], dtype='object', name='SaleCondition') MSSubClass Int64Index([20, 30, 50, 60, 70, 75, 80, 85, 90, 120, 160, 190], dtype='int64', name='MSSubClass')
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
### Encoding of categorical variables

Next, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.

To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
# this function will assign discrete values to the strings of the variables, # so that the smaller value corresponds to the category that shows the smaller # mean house sale price def replace_categories(train, test, y_train, var, target): tmp = pd.concat([X_train, y_train], axis=1) # order the categories in a variable from that with the lowest # house sale price, to that with the highest ordered_labels = tmp.groupby([var])[target].mean().sort_values().index # create a dictionary of ordered categories to integer values ordinal_label = {k: i for i, k in enumerate(ordered_labels, 0)} print(var, ordinal_label) print() # use the dictionary to replace the categorical strings by integers train[var] = train[var].map(ordinal_label) test[var] = test[var].map(ordinal_label) for var in cat_others: replace_categories(X_train, X_test, y_train, var, 'SalePrice') # check absence of na in the train set [var for var in X_train.columns if X_train[var].isnull().sum() > 0] # check absence of na in the test set [var for var in X_test.columns if X_test[var].isnull().sum() > 0] # let me show you what I mean by monotonic relationship # between labels and target def analyse_vars(train, y_train, var): # function plots median house sale price per encoded # category tmp = pd.concat([X_train, np.log(y_train)], axis=1) tmp.groupby(var)['SalePrice'].median().plot.bar() plt.title(var) plt.ylim(2.2, 2.6) plt.ylabel('SalePrice') plt.show() for var in cat_others: analyse_vars(X_train, y_train, var)
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the mean house sale price.

(Remember that the target is log-transformed, which is why the differences seem so small: a gap of 0.1 on the log scale corresponds to roughly a 10% difference in price, since exp(0.1) ≈ 1.105.)

## Feature Scaling

For use in linear models, features need to be scaled. We will scale features to their minimum and maximum values:
# create scaler scaler = MinMaxScaler() # fit the scaler to the train set scaler.fit(X_train) # transform the train and test set # sklearn returns numpy arrays, so we wrap the # array with a pandas dataframe X_train = pd.DataFrame( scaler.transform(X_train), columns=X_train.columns ) X_test = pd.DataFrame( scaler.transform(X_test), columns=X_train.columns ) X_train.head() # let's now save the train and test sets for the next notebook! X_train.to_csv('xtrain.csv', index=False) X_test.to_csv('xtest.csv', index=False) y_train.to_csv('ytrain.csv', index=False) y_test.to_csv('ytest.csv', index=False) # now let's save the scaler joblib.dump(scaler, 'minmax_scaler.joblib')
_____no_output_____
BSD-3-Clause
section-04-research-and-development/02-machine-learning-pipeline-feature-engineering.ipynb
chauthinh/machine-learning-deployment
# Fictional Army - Filtering and Sorting

### Introduction:

This exercise was inspired by this [page](http://chrisalbon.com/python/). Special thanks to https://github.com/chrisalbon for sharing the dataset and materials.

### Step 1. Import the necessary libraries
import pandas as pd
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 2. This is the data given as a dictionary
# Create an example dataframe about a fictional army
raw_data = {'regiment': ['Nighthawks', 'Nighthawks', 'Nighthawks', 'Nighthawks', 'Dragoons', 'Dragoons', 'Dragoons', 'Dragoons', 'Scouts', 'Scouts', 'Scouts', 'Scouts'],
            'company': ['1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd', '1st', '1st', '2nd', '2nd'],
            'deaths': [523, 52, 25, 616, 43, 234, 523, 62, 62, 73, 37, 35],
            'battles': [5, 42, 2, 2, 4, 7, 8, 3, 4, 7, 8, 9],
            'size': [1045, 957, 1099, 1400, 1592, 1006, 987, 849, 973, 1005, 1099, 1523],
            'veterans': [1, 5, 62, 26, 73, 37, 949, 48, 48, 435, 63, 345],
            'readiness': [1, 2, 3, 3, 2, 1, 2, 3, 2, 1, 2, 3],
            'armored': [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
            'deserters': [4, 24, 31, 2, 3, 4, 24, 31, 2, 3, 2, 3],
            'origin': ['Arizona', 'California', 'Texas', 'Florida', 'Maine', 'Iowa', 'Alaska', 'Washington', 'Oregon', 'Wyoming', 'Louisana', 'Georgia']}
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 3. Create a dataframe and assign it to a variable called army. Don't forget to include the column names in the order presented in the dictionary ('regiment', 'company', 'deaths'...) so that the column index order is consistent with the solutions. If omitted, pandas will order the columns alphabetically.
army = pd.DataFrame(raw_data, columns = ['regiment', 'company', 'deaths', 'battles', 'size', 'veterans', 'readiness', 'armored', 'deserters', 'origin'])
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 4. Set the 'origin' column as the index of the dataframe
army = army.set_index('origin')
army
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 5. Print only the column veterans
army['veterans']
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 6. Print the columns 'veterans' and 'deaths'
army[['veterans', 'deaths']]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 7. Print the name of all the columns.
army.columns
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 8. Select the 'deaths', 'size' and 'deserters' columns from Maine and Alaska
# Select all rows with the index label "Maine" and "Alaska"
army.loc[['Maine', 'Alaska'], ["deaths", "size", "deserters"]]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 9. Select the rows 3 to 7 and the columns 3 to 6
army.iloc[3:7, 3:6]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 10. Select every row after the fourth row
army.iloc[3:]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 11. Select every row up to the 4th row
army.iloc[:3]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 12. Select the 3rd column up to the 7th column
# the first : means all
# after the comma you select the range
army.iloc[:, 4:7]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 13. Select rows where df.deaths is greater than 50
army[army['deaths'] > 50]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 14. Select rows where df.deaths is greater than 500 or less than 50
army[(army['deaths'] > 500) | (army['deaths'] < 50)]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 15. Select all the regiments not named "Dragoons"
army[(army['regiment'] != 'Dragoons')]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 16. Select the rows called Texas and Arizona
army.loc[['Arizona', 'Texas']]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 17. Select the third cell in the row named Arizona
army.loc[['Arizona'], ['deaths']]
# OR
army.iloc[[0], army.columns.get_loc('deaths')]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
Step 18. Select the third cell down in the column named deaths
army.loc['Texas', 'deaths']
# OR
army.iloc[[2], army.columns.get_loc('deaths')]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Fictional Army/Exercise_with_solutions.ipynb
ViniciusRFerraz/pandas_exercises
**Title**: Spread Element
**Dependencies**: Matplotlib
**Backends**: Matplotlib, Bokeh
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
_____no_output_____
BSD-3-Clause
examples/reference/elements/matplotlib/Spread.ipynb
stonebig/holoviews
``Spread`` elements have the same data format as the [``ErrorBars``](ErrorBars.ipynb) element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as ``Curve`` is the continuous version of ``Scatter``). These are often paired with an overlaid ``Curve`` to show an average trend along with a corresponding spread of values; see the [Tabular Datasets](../../../user_guide/07-Tabular_Datasets.ipynb) user guide for examples.

Note that as the ``Spread`` element is used to add information to a plot (typically a ``Curve``), the default alpha value is less than one, making it partially transparent.

### Symmetric

Given two value dimensions corresponding to the position on the y-axis and the error, ``Spread`` will visualize itself assuming symmetric errors:
np.random.seed(42)
xs = np.linspace(0, np.pi*2, 20)
err = 0.2 + np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err))
_____no_output_____
BSD-3-Clause
examples/reference/elements/matplotlib/Spread.ipynb
stonebig/holoviews
### Asymmetric

Given three value dimensions corresponding to the position on the y-axis, the negative error and the positive error, ``Spread`` can be used to visualize asymmetric errors:
%%opts Spread (facecolor='indianred' alpha=1)
xs = np.linspace(0, np.pi*2, 20)
hv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))),
          vdims=['y', 'yerrneg', 'yerrpos'])
_____no_output_____
BSD-3-Clause
examples/reference/elements/matplotlib/Spread.ipynb
stonebig/holoviews
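As noted above, a ``Spread`` is typically paired with an overlaid ``Curve`` showing the trend it surrounds. The following is a minimal sketch of such an overlay, reusing the symmetric-error data format from the first example; the variable names and error values are illustrative only.

```python
import numpy as np
import holoviews as hv
hv.extension('matplotlib')

xs = np.linspace(0, np.pi*2, 20)
ys = np.sin(xs)
err = 0.2 + np.random.rand(len(xs))

# overlay the trend (Curve) on top of the band of values (Spread)
hv.Spread((xs, ys, err)) * hv.Curve((xs, ys))
```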
Run in Colab | View on GitHub

# Vertex AI: Track parameters and metrics for custom training jobs

## Overview

This notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.

### Dataset

This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone

### Objective

In this notebook, you will learn how to use Vertex SDK for Python to:

* Track training parameters and prediction metrics for a custom training job.
* Extract and perform analysis for all parameters and metrics within an Experiment.

### Costs

This tutorial uses billable components of Google Cloud:

* Vertex AI
* Cloud Storage

Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage.

## Set up your local development environment

**If you are using Colab or Google Cloud Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step.

**Otherwise**, make sure your environment meets this notebook's requirements. You need the following:

* The Google Cloud SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3

The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)
2. [Install Python 3.](https://cloud.google.com/python/setup#installing_python)
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.

## Install additional packages

Run the following commands to install the Vertex SDK for Python.
import sys

if "google.colab" in sys.modules:
    USER_FLAG = ""
else:
    USER_FLAG = "--user"

!python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Restart the kernel

After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
# Automatically restart kernel after installs
import os

if not os.getenv("IS_TESTING"):
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Before you begin

### Select a GPU runtime

**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"**

### Set up your Google Cloud project

**The following steps are required, regardless of your notebook environment.**

1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).
3. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.

**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.

### Set your project ID

**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
import os

PROJECT_ID = ""

# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
    print("Project ID: ", PROJECT_ID)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Otherwise, set your project ID here.
if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"}
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Set gcloud config to your project ID.
!gcloud config set project $PROJECT_ID
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Timestamp

If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Authenticate your Google Cloud account

**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.

**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.

**Otherwise**, follow these steps:

1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).
2. Click **Create service account**.
3. In the **Service account name** field, enter a name, and click **Create**.
4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI" into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
5. Click **Create**. A JSON file that contains your key downloads to your local environment.
6. Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
import os
import sys

# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
    if "google.colab" in sys.modules:
        from google.colab import auth as google_auth

        google_auth.authenticate_user()

    # If you are running this notebook locally, replace the string below with the
    # path to your service account key and run this cell to authenticate your GCP
    # account.
    elif not os.getenv("IS_TESTING"):
        %env GOOGLE_APPLICATION_CREDENTIALS ''
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Create a Cloud Storage bucket

**The following steps are required, regardless of your notebook environment.**

When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions.

Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.

You may also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may not use a Multi-Regional Storage bucket for training with Vertex AI.
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Import libraries and define constants

Import required libraries.
import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Initialize Vertex AI and set an _experiment_

Define experiment name.
EXPERIMENT_NAME = "" # @param {type:"string"}
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
If EXPERIMENT_NAME is not set, set a default one below:
if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None: EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Initialize the *client* for Vertex AI.
aiplatform.init( project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_NAME, experiment=EXPERIMENT_NAME, )
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Tracking parameters and metrics in Vertex AI custom training jobs

This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone
!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_NAME}/data/

gcs_csv_path = f"{BUCKET_NAME}/data/abalone_train.csv"
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Create a managed tabular dataset from a CSV

A Managed dataset can be used to create an AutoML model or a custom model.
ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])

ds.resource_name
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Write the training script

Run the following cell to create the training script that is used in the sample custom training job.
%%writefile training_script.py import pandas as pd import argparse import os import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers parser = argparse.ArgumentParser() parser.add_argument('--epochs', dest='epochs', default=10, type=int, help='Number of epochs.') parser.add_argument('--num_units', dest='num_units', default=64, type=int, help='Number of unit for first layer.') args = parser.parse_args() # uncomment and bump up replica_count for distributed training # strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # tf.distribute.experimental_set_strategy(strategy) col_names = ["Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age"] target = "Age" def aip_data_to_dataframe(wild_card_path): return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names) for fp in tf.data.Dataset.list_files([wild_card_path])]) def get_features_and_labels(df): return df.drop(target, axis=1).values, df[target].values def data_prep(wild_card_path): return get_features_and_labels(aip_data_to_dataframe(wild_card_path)) model = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)]) model.compile(loss='mse', optimizer='adam') model.fit(*data_prep(os.environ["AIP_TRAINING_DATA_URI"]), epochs=args.epochs , validation_data=data_prep(os.environ["AIP_VALIDATION_DATA_URI"])) print(model.evaluate(*data_prep(os.environ["AIP_TEST_DATA_URI"]))) # save as Vertex AI Managed model tf.saved_model.save(model, os.environ["AIP_MODEL_DIR"])
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
### Launch a custom training job and track its training parameters on Vertex AI ML Metadata
job = aiplatform.CustomTrainingJob( display_name="train-abalone-dist-1-replica", script_path="training_script.py", container_uri="gcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest", requirements=["gcsfs==0.7.1"], model_serving_container_image_uri="gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest", )
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.
aiplatform.start_run("custom-training-run-1")  # Change this to your desired run name

parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)

model = job.run(
    ds,
    replica_count=1,
    model_display_name="abalone-model",
    args=[f"--epochs={parameters['epochs']}", f"--num_units={parameters['num_units']}"],
)
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Deploy Model and calculate prediction metrics

Deploy model to Google Cloud. This operation will take 10-20 mins.
endpoint = model.deploy(machine_type="n1-standard-4")
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Once the model is deployed, perform online prediction using the `abalone_test` dataset and calculate prediction metrics.

Prepare the prediction dataset.
def read_data(uri): dataset_path = data_utils.get_file("auto-mpg.data", uri) col_names = [ "Length", "Diameter", "Height", "Whole weight", "Shucked weight", "Viscera weight", "Shell weight", "Age", ] dataset = pd.read_csv( dataset_path, names=col_names, na_values="?", comment="\t", sep=",", skipinitialspace=True, ) return dataset def get_features_and_labels(df): target = "Age" return df.drop(target, axis=1).values, df[target].values test_dataset, test_labels = get_features_and_labels( read_data( "https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv" ) )
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Perform online prediction.
prediction = endpoint.predict(test_dataset.tolist())

prediction
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Calculate and track prediction evaluation metrics.
mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)

aiplatform.log_metrics({"mse": mse, "mae": mae})
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
Extract all parameters and metrics created during this experiment.
aiplatform.get_experiment_df()
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## View data in the Cloud Console

Parameters and metrics can also be viewed in the Cloud Console.
print("Vertex AI Experiments:") print( f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}" )
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
## Cleaning up

To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:

* Training Job
* Model
* Endpoint
* Cloud Storage Bucket
delete_training_job = True
delete_model = True
delete_endpoint = True

# Warning: Setting this to true will delete everything in your bucket
delete_bucket = False

# Delete the training job
job.delete()

# Delete the model
model.delete()

# Delete the endpoint
endpoint.delete()

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil -m rm -r $BUCKET_NAME
_____no_output_____
Apache-2.0
ai-platform-unified/notebooks/unofficial/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb
thepycoder/ai-platform-samples
# End-To-End Example: Password Program

Password Program:

- 5 attempts for the password
- On correct password, print "Access Granted", then end the program
- On incorrect password, print "Invalid Password Attempt #" and give the user another try
- After 5 attempts, print "You are locked out", then end the program.
secret = "rhubarb" attempts = 0 while True: password = input("Enter Password: ") attempts= attempts + 1 if password == secret: print("Access Granted!") break print("Invalid password attempt #",attempts) if attempts == 5: print("You are locked out") break
Enter Password: sd Invalid password attempt # 1 Enter Password: fds Invalid password attempt # 2 Enter Password: sd Invalid password attempt # 3 Enter Password: d Invalid password attempt # 4 Enter Password: d Invalid password attempt # 5 You are locked out
MIT
content/lessons/04/End-To-End-Example/ETEE-Password-Program.ipynb
MahopacHS/spring-2020-Lamk0810
# Final Lab

*Felix Rojo Lapalma*

## Main task

In this notebook, we will apply transfer learning techniques to finetune the [MobileNet](https://arxiv.org/pdf/1704.04861.pdf) CNN on the [Cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset.

## Procedures

In general, the main steps that we will follow are:

1. Load data, analyze and split in *training*/*validation*/*testing* sets.
2. Load CNN and analyze architecture.
3. Adapt this CNN to our problem.
4. Setup data augmentation techniques (a sketch of steps 4-6 appears after this cell).
5. Add some keras callbacks.
6. Setup optimization algorithm with their hyperparameters.
7. Train model!
8. Choose best model/snapshot.
9. Evaluate final model on the *testing* set.
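Steps 4 to 6 of the list above (data augmentation, callbacks and optimizer setup) are not reached in this excerpt. The cell below is a minimal, illustrative sketch of how they might look with Keras; every hyperparameter value, filename and callback choice here is an assumption, not part of the original lab.

```python
# Illustrative sketch of steps 4-6 (data augmentation, callbacks, optimizer).
# All values below are placeholders and would need tuning for the actual lab.
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

# 4. Data augmentation: random flips and small shifts, plus rescaling to [0, 1]
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   horizontal_flip=True,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1)

# 5. Callbacks: keep the best snapshot and stop early if validation loss stalls
callbacks = [
    ModelCheckpoint('weights.{epoch:02d}-{val_loss:.2f}.hdf5',
                    monitor='val_loss', save_best_only=True),
    EarlyStopping(monitor='val_loss', patience=5),
]

# 6. Optimizer setup (to be applied once `model` is defined further below):
# model.compile(optimizer=Adam(lr=1e-3),
#               loss='categorical_crossentropy',
#               metrics=['accuracy'])
```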
# load libs
import os

import matplotlib.pyplot as plt
from IPython.display import SVG

# https://keras.io/applications/#documentation-for-individual-models
from keras.applications.mobilenet import MobileNet
from keras.datasets import cifar10
from keras.models import Model
from keras.utils.vis_utils import model_to_dot
from keras.layers import Dense, GlobalAveragePooling2D, Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import plot_model, to_categorical

from sklearn.model_selection import train_test_split

import cv2
import numpy as np
import tensorflow as tf
Using TensorFlow backend.
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
cuda
cuda_flag=False if cuda_flag: # Setup one GPU for tensorflow (don't be greedy). os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # The GPU id to use, "0", "1", etc. os.environ["CUDA_VISIBLE_DEVICES"] = "0" # Limit tensorflow gpu usage. # Maybe you should comment this lines if you run tensorflow on CPU. config = tf.ConfigProto() config.gpu_options.allow_growth = True config.gpu_options.per_process_gpu_memory_fraction = 0.3 sess = tf.Session(config=config)
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
1. Load data, analyze and split in *training*/*validation*/*testing* sets
# Cifar-10 class names # We will create a dictionary for each type of label # This is a mapping from the int class name to # their corresponding string class name LABELS = { 0: "airplane", 1: "automobile", 2: "bird", 3: "cat", 4: "deer", 5: "dog", 6: "frog", 7: "horse", 8: "ship", 9: "truck" } # Load dataset from keras (x_train_data, y_train_data), (x_test_data, y_test_data) = cifar10.load_data() ############ # [COMPLETE] # Add some prints here to see the loaded data dimensions ############ print("Cifar-10 x_train shape: {}".format(x_train_data.shape)) print("Cifar-10 y_train shape: {}".format(y_train_data.shape)) print("Cifar-10 x_test shape: {}".format(x_test_data.shape)) print("Cifar-10 y_test shape: {}".format(y_test_data.shape)) # from https://www.cs.toronto.edu/~kriz/cifar.html # The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. # The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks. # Some constants IMG_ROWS = 32 IMG_COLS = 32 NUM_CLASSES = 10 RANDOM_STATE = 2018 ############ # [COMPLETE] # Analyze the amount of images for each class # Plot some images to explore how they look ############ from genlib import get_classes_distribution,plot_label_per_class for y,yt in zip([y_train_data.flatten(),y_test_data.flatten()],['Train','Test']): print('{:>15s}'.format(yt)) get_classes_distribution(y,LABELS) plot_label_per_class(y,LABELS)
Train airplane : 5000 or 10.00% automobile : 5000 or 10.00% bird : 5000 or 10.00% cat : 5000 or 10.00% deer : 5000 or 10.00% dog : 5000 or 10.00% frog : 5000 or 10.00% horse : 5000 or 10.00% ship : 5000 or 10.00% truck : 5000 or 10.00%
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
Everything looks consistent with the documentation. Let's look at the images:
from genlib import sample_images_data,plot_sample_images for xy,yt in zip([(x_train_data,y_train_data.flatten()),(x_test_data,y_test_data.flatten())],['Train','Test']): print('{:>15s}'.format(yt)) train_sample_images, train_sample_labels = sample_images_data(*xy,LABELS) plot_sample_images(train_sample_images, train_sample_labels,LABELS) ############ # [COMPLETE] # Split training set in train/val sets # Use the sampling method that you want ############ #init seed np.random.seed(seed=RANDOM_STATE) full_set_flag=False # True: uses all images / False only a subset specified by TRAIN Samples and Val Frac VAL_FRAC=0.2 TRAIN_SIZE_BFV=x_train_data.shape[0] TRAIN_FRAC=(1-VAL_FRAC) # calc TRAIN_SAMPLES_FULL=int(TRAIN_FRAC*TRAIN_SIZE_BFV) # if full_set_flag==True TRAIN_SAMPLES_RED=20000 # if full_set_flag==False VAL_SAMPLES_RED=int(VAL_FRAC*TRAIN_SAMPLES_RED) # if full_set_flag==False if full_set_flag: # Esta forma parece servir si barremos todo el set sino... # # Get Index train_idxs = np.random.choice(np.arange(TRAIN_SIZE_BFV), size=TRAIN_SAMPLES_FULL, replace=False) val_idx=np.array([x for x in np.arange(TRAIN_SIZE_BFV) if x not in train_idxs]) else: train_idxs = np.random.choice(np.arange(TRAIN_SIZE_BFV), size=TRAIN_SAMPLES_RED, replace=False) val_idx=np.random.choice(train_idxs, size=VAL_SAMPLES_RED, replace=False) # Split x_val_data = x_train_data[val_idx, :, :, :] y_val_data = y_train_data[val_idx] x_train_data = x_train_data[train_idxs, :, :, :] y_train_data = y_train_data[train_idxs] #### #### print("Cifar-10 x_train shape: {}".format(x_train_data.shape)) print("Cifar-10 y_train shape: {}".format(y_train_data.shape)) print("Cifar-10 x_val shape: {}".format(x_val_data.shape)) print("Cifar-10 y_val shape: {}".format(y_val_data.shape)) print("Cifar-10 x_test shape: {}".format(x_test_data.shape)) print("Cifar-10 y_test shape: {}".format(y_test_data.shape))
Cifar-10 x_train shape: (20000, 32, 32, 3) Cifar-10 y_train shape: (20000, 1) Cifar-10 x_val shape: (4000, 32, 32, 3) Cifar-10 y_val shape: (4000, 1) Cifar-10 x_test shape: (10000, 32, 32, 3) Cifar-10 y_test shape: (10000, 1)
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
Let's check whether the Train and Validation sets remained balanced.
for y,yt in zip([y_train_data.flatten(),y_val_data.flatten()],['Train','Validation']): print('{:>15s}'.format(yt)) get_classes_distribution(y,LABELS) plot_label_per_class(y,LABELS) # In order to use the MobileNet CNN pre-trained on imagenet, we have # to resize our images to have one of the following static square shape: [(128, 128), # (160, 160), (192, 192), or (224, 224)]. # If we try to resize all the dataset this will not fit on memory, so we have to save all # the images to disk, and then when loading those images, our datagenerator will resize them # to the desired shape on-the-fly. ############ # [COMPLETE] # Use the above function to save all your data, e.g.: # save_to_disk(x_train, y_train, 'train', 'cifar10_images') # save_to_disk(x_val, y_val, 'val', 'cifar10_images') # save_to_disk(x_test, y_test, 'test', 'cifar10_images') ############ save_image_flag=False # To avoid saving images every time!!! if save_image_flag: from genlib import save_to_disk save_to_disk(x_train_data, y_train_data, 'train', output_dir='cifar10_images') save_to_disk(x_val_data, y_val_data, 'val', output_dir='cifar10_images') save_to_disk(x_test_data, y_test_data, 'test', output_dir='cifar10_images')
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
2. Load CNN and analyze architecture
#Model NO_EPOCHS = 25 BATCH_SIZE = 32 NET_IMG_ROWS = 128 NET_IMG_COLS = 128 ############ # [COMPLETE] # Use the MobileNet class from Keras to load your base model, pre-trained on imagenet. # We wan't to load the pre-trained weights, but without the classification layer. # Check the notebook '3_transfer-learning' or https://keras.io/applications/#mobilenet to get more # info about how to load this network properly. ############ #Note that this model only supports the data format 'channels_last' (height, width, channels). #The default input size for this model is 224x224. base_model = MobileNet(input_shape=(NET_IMG_ROWS, NET_IMG_COLS, 3), # Input image size weights='imagenet', # Use imagenet pre-trained weights include_top=False, # Drop classification layer pooling='avg') # Global AVG pooling for the # output feature vector
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
3. Adapt this CNN to our problem
############
# [COMPLETE]
# Having the CNN loaded, now we have to add some layers to adapt this network to our
# classification problem.
# We can choose to finetune just the new added layers, some particular layers or all the layers of the
# model. Play with different settings and compare the results.
############

# get the output feature vector from the base model
x = base_model.output

# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)

# Add Drop Out Layer
x = Dropout(0.5)(x)

# and a logistic layer
predictions = Dense(NUM_CLASSES, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# Initial Model Summary
model.summary()

model_png = False
if model_png:
    plot_model(model, to_file='model.png')
    SVG(model_to_dot(model).create(prog='dot', format='svg'))

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(model.layers):
    print(i, layer.name)

# At this stage we do not intend to train all the layers, only the newly added ones
for layer in model.layers[:88]:
    layer.trainable = False
for layer in model.layers[88:]:
    layer.trainable = True

model.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 128, 128, 3) 0 _________________________________________________________________ conv1_pad (ZeroPadding2D) (None, 129, 129, 3) 0 _________________________________________________________________ conv1 (Conv2D) (None, 64, 64, 32) 864 _________________________________________________________________ conv1_bn (BatchNormalization (None, 64, 64, 32) 128 _________________________________________________________________ conv1_relu (ReLU) (None, 64, 64, 32) 0 _________________________________________________________________ conv_dw_1 (DepthwiseConv2D) (None, 64, 64, 32) 288 _________________________________________________________________ conv_dw_1_bn (BatchNormaliza (None, 64, 64, 32) 128 _________________________________________________________________ conv_dw_1_relu (ReLU) (None, 64, 64, 32) 0 _________________________________________________________________ conv_pw_1 (Conv2D) (None, 64, 64, 64) 2048 _________________________________________________________________ conv_pw_1_bn (BatchNormaliza (None, 64, 64, 64) 256 _________________________________________________________________ conv_pw_1_relu (ReLU) (None, 64, 64, 64) 0 _________________________________________________________________ conv_pad_2 (ZeroPadding2D) (None, 65, 65, 64) 0 _________________________________________________________________ conv_dw_2 (DepthwiseConv2D) (None, 32, 32, 64) 576 _________________________________________________________________ conv_dw_2_bn (BatchNormaliza (None, 32, 32, 64) 256 _________________________________________________________________ conv_dw_2_relu (ReLU) (None, 32, 32, 64) 0 _________________________________________________________________ conv_pw_2 (Conv2D) (None, 32, 32, 128) 8192 _________________________________________________________________ conv_pw_2_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_pw_2_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_dw_3 (DepthwiseConv2D) (None, 32, 32, 128) 1152 _________________________________________________________________ conv_dw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_dw_3_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_pw_3 (Conv2D) (None, 32, 32, 128) 16384 _________________________________________________________________ conv_pw_3_bn (BatchNormaliza (None, 32, 32, 128) 512 _________________________________________________________________ conv_pw_3_relu (ReLU) (None, 32, 32, 128) 0 _________________________________________________________________ conv_pad_4 (ZeroPadding2D) (None, 33, 33, 128) 0 _________________________________________________________________ conv_dw_4 (DepthwiseConv2D) (None, 16, 16, 128) 1152 _________________________________________________________________ conv_dw_4_bn (BatchNormaliza (None, 16, 16, 128) 512 _________________________________________________________________ conv_dw_4_relu (ReLU) (None, 16, 16, 128) 0 _________________________________________________________________ conv_pw_4 (Conv2D) (None, 16, 16, 256) 32768 _________________________________________________________________ conv_pw_4_bn (BatchNormaliza (None, 16, 16, 256) 1024 
_________________________________________________________________ conv_pw_4_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_dw_5 (DepthwiseConv2D) (None, 16, 16, 256) 2304 _________________________________________________________________ conv_dw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 _________________________________________________________________ conv_dw_5_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_pw_5 (Conv2D) (None, 16, 16, 256) 65536 _________________________________________________________________ conv_pw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024 _________________________________________________________________ conv_pw_5_relu (ReLU) (None, 16, 16, 256) 0 _________________________________________________________________ conv_pad_6 (ZeroPadding2D) (None, 17, 17, 256) 0 _________________________________________________________________ conv_dw_6 (DepthwiseConv2D) (None, 8, 8, 256) 2304 _________________________________________________________________ conv_dw_6_bn (BatchNormaliza (None, 8, 8, 256) 1024 _________________________________________________________________ conv_dw_6_relu (ReLU) (None, 8, 8, 256) 0 _________________________________________________________________ conv_pw_6 (Conv2D) (None, 8, 8, 512) 131072 _________________________________________________________________ conv_pw_6_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_6_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_7 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_7_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_7 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_7_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_8 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_8_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_8 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_8_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_9 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_9_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_9 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048 
_________________________________________________________________ conv_pw_9_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_10 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_10_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_10 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_10_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_dw_11 (DepthwiseConv2D) (None, 8, 8, 512) 4608 _________________________________________________________________ conv_dw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_dw_11_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pw_11 (Conv2D) (None, 8, 8, 512) 262144 _________________________________________________________________ conv_pw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048 _________________________________________________________________ conv_pw_11_relu (ReLU) (None, 8, 8, 512) 0 _________________________________________________________________ conv_pad_12 (ZeroPadding2D) (None, 9, 9, 512) 0 _________________________________________________________________ conv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608 _________________________________________________________________ conv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048 _________________________________________________________________ conv_dw_12_relu (ReLU) (None, 4, 4, 512) 0 _________________________________________________________________ conv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288 _________________________________________________________________ conv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ conv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216 _________________________________________________________________ conv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ conv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576 _________________________________________________________________ conv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 _________________________________________________________________ conv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0 _________________________________________________________________ global_average_pooling2d_1 ( (None, 1024) 0 _________________________________________________________________ dense_1 (Dense) (None, 1024) 1049600 _________________________________________________________________ dropout_1 (Dropout) (None, 1024) 0 _________________________________________________________________ dense_2 (Dense) (None, 10) 10250 ================================================================= Total params: 4,288,714 Trainable params: 1,059,850 Non-trainable params: 3,228,864 
_________________________________________________________________
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
4. Setup data augmentation techniques
############ # [COMPLETE] # Use data augmentation to train your model. # Use the Keras ImageDataGenerator class for this purpose. # Note: Given that we want to load our images from disk, instead of using # ImageDataGenerator.flow method, we have to use ImageDataGenerator.flow_from_directory # method in the following way: # generator_train = dataget_train.flow_from_directory('resized_images/train', # target_size=(128, 128), batch_size=32) # generator_val = dataget_train.flow_from_directory('resized_images/val', # target_size=(128, 128), batch_size=32) # Note that we have to resize our images to finetune the MobileNet CNN, this is done using # the target_size argument in flow_from_directory. Remember to set the target_size to one of # the valid sizes listed here: [(128, 128), (160, 160), (192, 192), or (224, 224)]. ############ data_get=ImageDataGenerator() generator_train = data_get.flow_from_directory(directory='cifar10_images/train', target_size=(128, 128), batch_size=BATCH_SIZE) generator_val = data_get.flow_from_directory(directory='cifar10_images/val', target_size=(128, 128), batch_size=BATCH_SIZE)
Found 40000 images belonging to 10 classes. Found 10000 images belonging to 10 classes.
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
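Note that the `ImageDataGenerator()` above is instantiated with no arguments, so the generators only resize and batch the images — no augmentation is actually applied. A minimal sketch of how augmentation parameters could be plugged in; the specific values and the reuse of the `cifar10_images/train` directory are illustrative assumptions, not settings used in this run:

```python
from keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings -- values chosen only for illustration.
augmenting_datagen = ImageDataGenerator(
    rotation_range=15,       # random rotations of up to 15 degrees
    width_shift_range=0.1,   # horizontal shifts of up to 10% of the image width
    height_shift_range=0.1,  # vertical shifts of up to 10% of the image height
    horizontal_flip=True,    # random left-right flips
)

# Same on-disk layout as the cell above; augmentation is applied on-the-fly per batch.
augmented_train = augmenting_datagen.flow_from_directory(
    directory='cifar10_images/train',
    target_size=(128, 128),
    batch_size=32,
)
```

Augmentation would normally be applied to the training generator only, keeping the validation generator untouched so validation metrics stay comparable across settings.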
5. Add some keras callbacks
############ # [COMPLETE] # Load and set some Keras callbacks here! ############ EXP_ID='experiment_003/' from keras.callbacks import ModelCheckpoint, TensorBoard if not os.path.exists(EXP_ID): os.makedirs(EXP_ID) callbacks = [ ModelCheckpoint(filepath=os.path.join(EXP_ID, 'weights.{epoch:02d}-{val_loss:.2f}.hdf5'), monitor='val_loss', verbose=1, save_best_only=False, save_weights_only=False, mode='auto'), TensorBoard(log_dir=os.path.join(EXP_ID, 'logs'), write_graph=True, write_images=False) ]
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
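A callback that could additionally be appended to the list above — not used in the run recorded below — is `EarlyStopping`, which halts training once the monitored validation loss stops improving. The `patience` value here is an illustrative assumption:

```python
from keras.callbacks import EarlyStopping

# Stop training if val_loss has not improved for 5 consecutive epochs (hypothetical setting).
early_stopping = EarlyStopping(monitor='val_loss', patience=5, verbose=1)

# callbacks.append(early_stopping)  # would extend the callback list defined above
```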
6. Setup optimization algorithm with their hyperparameters
############ # [COMPLETE] # Choose some optimization algorithm and explore different hyperparameters. # Compile your model. ############ from keras.optimizers import SGD from keras.losses import categorical_crossentropy #model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), # loss='categorical_crossentropy', # metrics=['accuracy']) model.compile(loss=categorical_crossentropy, optimizer='adam', metrics=['accuracy'])
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
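Since the exercise asks to explore different hyperparameters, one alternative compilation worth comparing is Adam with a smaller learning rate than its default of 1e-3 (the commented-out SGD line above is another). The value below is only an illustrative assumption, not the configuration used for the recorded run:

```python
from keras.optimizers import Adam
from keras.losses import categorical_crossentropy

# Hypothetical alternative: Adam with a reduced learning rate, often tried when fine-tuning.
model.compile(loss=categorical_crossentropy,
              optimizer=Adam(lr=1e-4),
              metrics=['accuracy'])
```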
7. Train model!
generator_train.n ############ # [COMPLETE] # Use fit_generator to train your model. # e.g.: # model.fit_generator( # generator_train, # epochs=50, # validation_data=generator_val, # steps_per_epoch=generator_train.n // 32, # validation_steps=generator_val.n // 32) ############ if full_set_flag: steps_per_epoch=generator_train.n // BATCH_SIZE validation_steps=generator_val.n // BATCH_SIZE else: steps_per_epoch=TRAIN_SAMPLES_RED // BATCH_SIZE validation_steps=VAL_SAMPLES_RED // BATCH_SIZE model.fit_generator(generator_train, epochs=NO_EPOCHS, validation_data=generator_val, steps_per_epoch=steps_per_epoch, validation_steps=validation_steps, callbacks=callbacks)
Epoch 1/25 625/625 [==============================] - 1911s 3s/step - loss: 0.7648 - acc: 0.7352 - val_loss: 2.4989 - val_acc: 0.2167 Epoch 00001: saving model to experiment_003/weights.01-2.50.hdf5 Epoch 2/25 625/625 [==============================] - 1927s 3s/step - loss: 0.7447 - acc: 0.7426 - val_loss: 2.7681 - val_acc: 0.1904 Epoch 00002: saving model to experiment_003/weights.02-2.77.hdf5 Epoch 3/25 625/625 [==============================] - 1902s 3s/step - loss: 0.6890 - acc: 0.7630 - val_loss: 2.9040 - val_acc: 0.2019 Epoch 00003: saving model to experiment_003/weights.03-2.90.hdf5 Epoch 4/25 625/625 [==============================] - 1933s 3s/step - loss: 0.6982 - acc: 0.7597 - val_loss: 2.9734 - val_acc: 0.1787 Epoch 00004: saving model to experiment_003/weights.04-2.97.hdf5 Epoch 5/25 625/625 [==============================] - 1914s 3s/step - loss: 0.6404 - acc: 0.7810 - val_loss: 2.3613 - val_acc: 0.2074 Epoch 00005: saving model to experiment_003/weights.05-2.36.hdf5 Epoch 6/25 625/625 [==============================] - 1903s 3s/step - loss: 0.6643 - acc: 0.7724 - val_loss: 2.6470 - val_acc: 0.2183 Epoch 00006: saving model to experiment_003/weights.06-2.65.hdf5 Epoch 7/25 625/625 [==============================] - 1924s 3s/step - loss: 0.6096 - acc: 0.7885 - val_loss: 2.4154 - val_acc: 0.2025 Epoch 00007: saving model to experiment_003/weights.07-2.42.hdf5 Epoch 8/25 625/625 [==============================] - 1935s 3s/step - loss: 0.6471 - acc: 0.7776 - val_loss: 2.5618 - val_acc: 0.2140 Epoch 00008: saving model to experiment_003/weights.08-2.56.hdf5 Epoch 9/25 625/625 [==============================] - 2020s 3s/step - loss: 0.5878 - acc: 0.7964 - val_loss: 3.1497 - val_acc: 0.1823 Epoch 00009: saving model to experiment_003/weights.09-3.15.hdf5 Epoch 10/25 625/625 [==============================] - 1981s 3s/step - loss: 0.6049 - acc: 0.7921 - val_loss: 3.1617 - val_acc: 0.1673 Epoch 00010: saving model to experiment_003/weights.10-3.16.hdf5 Epoch 11/25 624/625 [============================>.] - ETA: 5s - loss: 0.5667 - acc: 0.8012
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
8. Choose best model/snapshot
############ # [COMPLETE] # Analyze and compare your results. Choose the best model and snapshot, # justify your election. ############
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
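The cell above is left as a placeholder. In the partial training log, the lowest validation loss (2.36) is reached at epoch 5, while validation accuracy stays far below training accuracy, which points to overfitting of the new layers. A minimal sketch of how the best snapshot could be picked programmatically, assuming the `weights.{epoch:02d}-{val_loss:.2f}.hdf5` naming pattern configured in the `ModelCheckpoint` above:

```python
import glob
import os

from keras.models import load_model

def best_checkpoint(exp_dir):
    """Return the checkpoint file with the lowest validation loss encoded in its name."""
    paths = glob.glob(os.path.join(exp_dir, 'weights.*.hdf5'))
    # File names look like 'weights.05-2.36.hdf5'; the value after '-' is val_loss.
    return min(paths, key=lambda p: float(os.path.basename(p).rsplit('-', 1)[1][:-len('.hdf5')]))

# best_model = load_model(best_checkpoint('experiment_003/'))
```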
9. Evaluate final model on the *testing* set
############ # [COMPLETE] # Evaluate your model on the testing set. ############
_____no_output_____
MIT
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_003.ipynb
felixlapalma/diplodatos_2018
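This cell is also a placeholder. A sketch of one way the evaluation could be done, assuming the test split was saved to `cifar10_images/test` by the `save_to_disk` call earlier and that `model` holds the chosen snapshot:

```python
from keras.preprocessing.image import ImageDataGenerator

test_datagen = ImageDataGenerator()
generator_test = test_datagen.flow_from_directory(
    directory='cifar10_images/test',
    target_size=(128, 128),
    batch_size=32,
    shuffle=False,  # keep a fixed order in case per-class metrics are computed afterwards
)

# evaluate_generator returns [loss, accuracy] because the model was compiled with metrics=['accuracy']
test_loss, test_acc = model.evaluate_generator(generator_test,
                                               steps=generator_test.n // 32)
print('Test loss: {:.4f} - Test accuracy: {:.4f}'.format(test_loss, test_acc))
```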
Import libraries
import pandas as pd import numpy as np from sklearn.svm import LinearSVR, LinearSVC from sklearn.svm import * from sklearn.linear_model import Lasso, LogisticRegression, LinearRegression from sklearn.tree import DecisionTreeRegressor,DecisionTreeClassifier from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, GradientBoostingRegressor, GradientBoostingClassifier from sklearn.feature_selection import SelectFromModel from sklearn.discriminant_analysis import LinearDiscriminantAnalysis from sklearn.decomposition import PCA,LatentDirichletAllocation from sklearn.metrics import * from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Load the dataset
filePath = './data/138rows_after.xlsx' dataFrame = pd.read_excel(filePath) dataArray = np.array(dataFrame) dataFrame
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Get the column labels
name = [column for column in dataFrame] name = name[5:] pd.DataFrame(name)
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Check the data dimensions
X_withLabel = dataArray[:92,5:] X_all = dataArray[:,5:] y_data = dataArray[:92,3] y_label= dataArray[:92,4].astype(int) print("有标签数据的规模:",X_withLabel.shape) print("所有数据的规模:",X_all.shape) print("回归标签的规模:",y_data.shape) print("分类标签的规模:",y_label.shape)
有标签数据的规模: (92, 76) 所有数据的规模: (138, 76) 回归标签的规模: (92,) 分类标签的规模: (92,)
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Regression — Feature selection with Lasso
lasso = Lasso(alpha = 0.5,max_iter=5000).fit(X_withLabel, y_data) modelLasso = SelectFromModel(lasso, prefit=True) X_Lasso = modelLasso.transform(X_withLabel) LassoIndexMask = modelLasso.get_support() # 获取筛选的mask value = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值 LassoIndexMask = LassoIndexMask.tolist() LassoIndexTrue = [] LassoIndexFalse = [] for i in range(len(LassoIndexMask)): # 记录下被筛选的indicator的序号 if (LassoIndexMask[i]==True): LassoIndexTrue.append(i) if (LassoIndexMask[i]==False): LassoIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(LassoIndexTrue)): print(i+1,":",name[LassoIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(LassoIndexFalse)): print(i+1,":",name[LassoIndexFalse[i]]) dataFrameOfLassoRegressionFeature = dataFrame for i in range(len(LassoIndexFalse)): dataFrameOfLassoRegressionFeature = dataFrameOfLassoRegressionFeature.drop([name[LassoIndexFalse[i]]],axis=1) dataFrameOfLassoRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/LassoFeatureSelectionOfData.xlsx') dataFrameOfLassoRegressionFeature
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 α 节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 θ 节律, µV 5 : FP2-A2 α 节律, µV 6 : FP2-A2 β(LF)节律, µV 7 : F3-A1 α 节律, µV 8 : F4-A2 α 节律, µV 9 : FZ-A2 δ 节律,µV 10 : C3-A1 α 节律, µV 11 : C4-A2 θ 节律, µV 12 : C4-A2 α 节律, µV 13 : C4-A2 β(LF)节律, µV 14 : CZ-A1 α 节律, µV 15 : P3-A1 δ 节律,µV 16 : P4-A2 α 节律, µV 17 : P4-A2 β(LF)节律, µV 18 : PZ-A2 δ 节律,µV 19 : PZ-A2 α 节律, µV 20 : PZ-A2 β(LF)节律, µV 21 : O1-A1 δ 节律,µV 22 : O1-A1 θ 节律, µV 23 : O1-A1 α 节律, µV 24 : O2-A2 δ 节律,µV 25 : O2-A2 θ 节律, µV 26 : F7-A1 δ 节律,µV 27 : F8-A2 δ 节律,µV 28 : T3-A1 θ 节律, µV 29 : T3-A1 α 节律, µV 30 : T3-A1 β(LF)节律, µV 31 : T4-A2 δ 节律,µV 32 : T4-A2 α 节律, µV 33 : T4-A2 β(LF)节律, µV 34 : T5-A1 δ 节律,µV 35 : T5-A1 θ 节律, µV 36 : T5-A1 α 节律, µV 37 : T6-A2 θ 节律, µV 38 : T6-A2 α 节律, µV 39 : T6-A2 β(LF)节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 β(LF)节律, µV 3 : F3-A1 δ 节律,µV 4 : F3-A1 θ 节律, µV 5 : F3-A1 β(LF)节律, µV 6 : F4-A2 δ 节律,µV 7 : F4-A2 θ 节律, µV 8 : F4-A2 β(LF)节律, µV 9 : FZ-A2 θ 节律, µV 10 : FZ-A2 α 节律, µV 11 : FZ-A2 β(LF)节律, µV 12 : C3-A1 δ 节律,µV 13 : C3-A1 θ 节律, µV 14 : C3-A1 β(LF)节律, µV 15 : C4-A2 δ 节律,µV 16 : CZ-A1 δ 节律,µV 17 : CZ-A1 θ 节律, µV 18 : CZ-A1 β(LF)节律, µV 19 : P3-A1 θ 节律, µV 20 : P3-A1 α 节律, µV 21 : P3-A1 β(LF)节律, µV 22 : P4-A2 δ 节律,µV 23 : P4-A2 θ 节律, µV 24 : PZ-A2 θ 节律, µV 25 : O1-A1 β(LF)节律, µV 26 : O2-A2 α 节律, µV 27 : O2-A2 β(LF)节律, µV 28 : F7-A1 θ 节律, µV 29 : F7-A1 α 节律, µV 30 : F7-A1 β(LF)节律, µV 31 : F8-A2 θ 节律, µV 32 : F8-A2 α 节律, µV 33 : F8-A2 β(LF)节律, µV 34 : T3-A1 δ 节律,µV 35 : T4-A2 θ 节律, µV 36 : T5-A1 β(LF)节律, µV 37 : T6-A2 δ 节律,µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with SVR
lsvr = LinearSVR(C=10,max_iter=10000,loss='squared_epsilon_insensitive',dual=False).fit(X_withLabel, y_data) modelLSVR = SelectFromModel(lsvr, prefit=True) X_LSVR = modelLSVR.transform(X_withLabel) SVRIndexMask = modelLSVR.get_support() # 获取筛选的mask value = X_withLabel[:,SVRIndexMask].tolist() # 被筛选出来的列的值 SVRIndexMask = SVRIndexMask.tolist() SVRIndexTrue = [] SVRIndexFalse = [] for i in range(len(SVRIndexMask)): # 记录下被筛选的indicator的序号 if (SVRIndexMask[i]==True): SVRIndexTrue.append(i) if (SVRIndexMask[i]==False): SVRIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(SVRIndexTrue)): print(i+1,":",name[SVRIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(SVRIndexFalse)): print(i+1,":",name[SVRIndexFalse[i]]) dataFrameOfLSVRegressionFeature = dataFrame for i in range(len(SVRIndexFalse)): dataFrameOfLSVRegressionFeature = dataFrameOfLSVRegressionFeature.drop([name[SVRIndexFalse[i]]],axis=1) dataFrameOfLSVRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/LSVRFeatureSelectionOfLabel.xlsx') dataFrameOfLSVRegressionFeature
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 β(LF)节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 θ 节律, µV 5 : FP2-A2 β(LF)节律, µV 6 : F3-A1 θ 节律, µV 7 : F4-A2 β(LF)节律, µV 8 : C3-A1 β(LF)节律, µV 9 : CZ-A1 θ 节律, µV 10 : CZ-A1 β(LF)节律, µV 11 : P3-A1 δ 节律,µV 12 : P3-A1 θ 节律, µV 13 : P3-A1 α 节律, µV 14 : P4-A2 δ 节律,µV 15 : P4-A2 θ 节律, µV 16 : P4-A2 α 节律, µV 17 : P4-A2 β(LF)节律, µV 18 : O1-A1 θ 节律, µV 19 : O1-A1 β(LF)节律, µV 20 : O2-A2 θ 节律, µV 21 : O2-A2 β(LF)节律, µV 22 : F7-A1 θ 节律, µV 23 : F7-A1 β(LF)节律, µV 24 : F8-A2 δ 节律,µV 25 : F8-A2 α 节律, µV 26 : F8-A2 β(LF)节律, µV 27 : T4-A2 β(LF)节律, µV 28 : T5-A1 β(LF)节律, µV 29 : T6-A2 δ 节律,µV 30 : T6-A2 θ 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 α 节律, µV 3 : FP2-A2 α 节律, µV 4 : F3-A1 δ 节律,µV 5 : F3-A1 α 节律, µV 6 : F3-A1 β(LF)节律, µV 7 : F4-A2 δ 节律,µV 8 : F4-A2 θ 节律, µV 9 : F4-A2 α 节律, µV 10 : FZ-A2 δ 节律,µV 11 : FZ-A2 θ 节律, µV 12 : FZ-A2 α 节律, µV 13 : FZ-A2 β(LF)节律, µV 14 : C3-A1 δ 节律,µV 15 : C3-A1 θ 节律, µV 16 : C3-A1 α 节律, µV 17 : C4-A2 δ 节律,µV 18 : C4-A2 θ 节律, µV 19 : C4-A2 α 节律, µV 20 : C4-A2 β(LF)节律, µV 21 : CZ-A1 δ 节律,µV 22 : CZ-A1 α 节律, µV 23 : P3-A1 β(LF)节律, µV 24 : PZ-A2 δ 节律,µV 25 : PZ-A2 θ 节律, µV 26 : PZ-A2 α 节律, µV 27 : PZ-A2 β(LF)节律, µV 28 : O1-A1 δ 节律,µV 29 : O1-A1 α 节律, µV 30 : O2-A2 δ 节律,µV 31 : O2-A2 α 节律, µV 32 : F7-A1 δ 节律,µV 33 : F7-A1 α 节律, µV 34 : F8-A2 θ 节律, µV 35 : T3-A1 δ 节律,µV 36 : T3-A1 θ 节律, µV 37 : T3-A1 α 节律, µV 38 : T3-A1 β(LF)节律, µV 39 : T4-A2 δ 节律,µV 40 : T4-A2 θ 节律, µV 41 : T4-A2 α 节律, µV 42 : T5-A1 δ 节律,µV 43 : T5-A1 θ 节律, µV 44 : T5-A1 α 节律, µV 45 : T6-A2 α 节律, µV 46 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with a decision tree
decisionTree = DecisionTreeRegressor(min_samples_leaf=1,random_state=1).fit(X_withLabel, y_data) modelDecisionTree = SelectFromModel(decisionTree, prefit=True) X_DecisionTree = modelDecisionTree.transform(X_withLabel) decisionTreeIndexMask = modelDecisionTree.get_support() # 获取筛选的mask value = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值 decisionTreeIndexMask = decisionTreeIndexMask.tolist() decisionTreeIndexTrue = [] decisionTreeIndexFalse = [] for i in range(len(decisionTreeIndexMask)): # 记录下被筛选的indicator的序号 if (decisionTreeIndexMask[i]==True): decisionTreeIndexTrue.append(i) if (decisionTreeIndexMask[i]==False): decisionTreeIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(decisionTreeIndexTrue)): print(i+1,":",name[decisionTreeIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(decisionTreeIndexFalse)): print(i+1,":",name[decisionTreeIndexFalse[i]]) dataFrameOfDecisionTreeRegressionFeature = dataFrame for i in range(len(decisionTreeIndexFalse)): dataFrameOfDecisionTreeRegressionFeature = dataFrameOfDecisionTreeRegressionFeature.drop([name[decisionTreeIndexFalse[i]]],axis=1) dataFrameOfDecisionTreeRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/DecisionTreeFeatureSelectionOfData.xlsx') dataFrameOfDecisionTreeRegressionFeature
被筛选后剩下的特征: 1 : F4-A2 θ 节律, µV 2 : F4-A2 α 节律, µV 3 : FZ-A2 θ 节律, µV 4 : FZ-A2 β(LF)节律, µV 5 : C3-A1 θ 节律, µV 6 : C3-A1 β(LF)节律, µV 7 : CZ-A1 β(LF)节律, µV 8 : P3-A1 δ 节律,µV 9 : P3-A1 β(LF)节律, µV 10 : PZ-A2 α 节律, µV 11 : O2-A2 δ 节律,µV 12 : O2-A2 α 节律, µV 13 : F8-A2 δ 节律,µV 14 : T3-A1 θ 节律, µV 15 : T5-A1 β(LF)节律, µV 16 : T6-A2 α 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 θ 节律, µV 3 : FP1-A1 α 节律, µV 4 : FP1-A1 β(LF)节律, µV 5 : FP2-A2 δ 节律,µV 6 : FP2-A2 θ 节律, µV 7 : FP2-A2 α 节律, µV 8 : FP2-A2 β(LF)节律, µV 9 : F3-A1 δ 节律,µV 10 : F3-A1 θ 节律, µV 11 : F3-A1 α 节律, µV 12 : F3-A1 β(LF)节律, µV 13 : F4-A2 δ 节律,µV 14 : F4-A2 β(LF)节律, µV 15 : FZ-A2 δ 节律,µV 16 : FZ-A2 α 节律, µV 17 : C3-A1 δ 节律,µV 18 : C3-A1 α 节律, µV 19 : C4-A2 δ 节律,µV 20 : C4-A2 θ 节律, µV 21 : C4-A2 α 节律, µV 22 : C4-A2 β(LF)节律, µV 23 : CZ-A1 δ 节律,µV 24 : CZ-A1 θ 节律, µV 25 : CZ-A1 α 节律, µV 26 : P3-A1 θ 节律, µV 27 : P3-A1 α 节律, µV 28 : P4-A2 δ 节律,µV 29 : P4-A2 θ 节律, µV 30 : P4-A2 α 节律, µV 31 : P4-A2 β(LF)节律, µV 32 : PZ-A2 δ 节律,µV 33 : PZ-A2 θ 节律, µV 34 : PZ-A2 β(LF)节律, µV 35 : O1-A1 δ 节律,µV 36 : O1-A1 θ 节律, µV 37 : O1-A1 α 节律, µV 38 : O1-A1 β(LF)节律, µV 39 : O2-A2 θ 节律, µV 40 : O2-A2 β(LF)节律, µV 41 : F7-A1 δ 节律,µV 42 : F7-A1 θ 节律, µV 43 : F7-A1 α 节律, µV 44 : F7-A1 β(LF)节律, µV 45 : F8-A2 θ 节律, µV 46 : F8-A2 α 节律, µV 47 : F8-A2 β(LF)节律, µV 48 : T3-A1 δ 节律,µV 49 : T3-A1 α 节律, µV 50 : T3-A1 β(LF)节律, µV 51 : T4-A2 δ 节律,µV 52 : T4-A2 θ 节律, µV 53 : T4-A2 α 节律, µV 54 : T4-A2 β(LF)节律, µV 55 : T5-A1 δ 节律,µV 56 : T5-A1 θ 节律, µV 57 : T5-A1 α 节律, µV 58 : T6-A2 δ 节律,µV 59 : T6-A2 θ 节律, µV 60 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with a random forest
randomForest = RandomForestRegressor().fit(X_withLabel, y_data) modelrandomForest = SelectFromModel(randomForest, prefit=True) X_randomForest = modelrandomForest.transform(X_withLabel) randomForestIndexMask = modelrandomForest.get_support() # 获取筛选的mask value = X_withLabel[:,randomForestIndexMask].tolist() # 被筛选出来的列的值 randomForestIndexMask = randomForestIndexMask.tolist() randomForestIndexTrue = [] randomForestIndexFalse = [] for i in range(len(randomForestIndexMask)): # 记录下被筛选的indicator的序号 if (randomForestIndexMask[i]==True): randomForestIndexTrue.append(i) if (randomForestIndexMask[i]==False): randomForestIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(randomForestIndexTrue)): print(i+1,":",name[randomForestIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(randomForestIndexFalse)): print(i+1,":",name[randomForestIndexFalse[i]]) dataFrameOfRandomForestRegressionFeature = dataFrame for i in range(len(randomForestIndexFalse)): dataFrameOfRandomForestRegressionFeature = dataFrameOfRandomForestRegressionFeature.drop([name[randomForestIndexFalse[i]]],axis=1) dataFrameOfRandomForestRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/RandomForestFeatureSelectionOfData.xlsx') dataFrameOfRandomForestRegressionFeature
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 α 节律, µV 3 : FP2-A2 θ 节律, µV 4 : FP2-A2 β(LF)节律, µV 5 : F3-A1 θ 节律, µV 6 : F4-A2 θ 节律, µV 7 : C3-A1 θ 节律, µV 8 : C4-A2 δ 节律,µV 9 : C4-A2 θ 节律, µV 10 : P3-A1 δ 节律,µV 11 : P4-A2 θ 节律, µV 12 : PZ-A2 β(LF)节律, µV 13 : O1-A1 θ 节律, µV 14 : O2-A2 δ 节律,µV 15 : O2-A2 θ 节律, µV 16 : O2-A2 β(LF)节律, µV 17 : F7-A1 θ 节律, µV 18 : F8-A2 δ 节律,µV 19 : F8-A2 θ 节律, µV 20 : F8-A2 α 节律, µV 21 : T3-A1 θ 节律, µV 22 : T3-A1 β(LF)节律, µV 23 : T4-A2 δ 节律,µV 24 : T4-A2 θ 节律, µV 25 : T4-A2 β(LF)节律, µV 26 : T5-A1 θ 节律, µV 27 : T5-A1 β(LF)节律, µV 28 : T6-A2 θ 节律, µV 29 : T6-A2 β(LF)节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 β(LF)节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 α 节律, µV 5 : F3-A1 δ 节律,µV 6 : F3-A1 α 节律, µV 7 : F3-A1 β(LF)节律, µV 8 : F4-A2 δ 节律,µV 9 : F4-A2 α 节律, µV 10 : F4-A2 β(LF)节律, µV 11 : FZ-A2 δ 节律,µV 12 : FZ-A2 θ 节律, µV 13 : FZ-A2 α 节律, µV 14 : FZ-A2 β(LF)节律, µV 15 : C3-A1 δ 节律,µV 16 : C3-A1 α 节律, µV 17 : C3-A1 β(LF)节律, µV 18 : C4-A2 α 节律, µV 19 : C4-A2 β(LF)节律, µV 20 : CZ-A1 δ 节律,µV 21 : CZ-A1 θ 节律, µV 22 : CZ-A1 α 节律, µV 23 : CZ-A1 β(LF)节律, µV 24 : P3-A1 θ 节律, µV 25 : P3-A1 α 节律, µV 26 : P3-A1 β(LF)节律, µV 27 : P4-A2 δ 节律,µV 28 : P4-A2 α 节律, µV 29 : P4-A2 β(LF)节律, µV 30 : PZ-A2 δ 节律,µV 31 : PZ-A2 θ 节律, µV 32 : PZ-A2 α 节律, µV 33 : O1-A1 δ 节律,µV 34 : O1-A1 α 节律, µV 35 : O1-A1 β(LF)节律, µV 36 : O2-A2 α 节律, µV 37 : F7-A1 δ 节律,µV 38 : F7-A1 α 节律, µV 39 : F7-A1 β(LF)节律, µV 40 : F8-A2 β(LF)节律, µV 41 : T3-A1 δ 节律,µV 42 : T3-A1 α 节律, µV 43 : T4-A2 α 节律, µV 44 : T5-A1 δ 节律,µV 45 : T5-A1 α 节律, µV 46 : T6-A2 δ 节律,µV 47 : T6-A2 α 节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with GBDT
GBDTRegressor = GradientBoostingRegressor().fit(X_withLabel, y_data) modelGBDTRegressor = SelectFromModel(GBDTRegressor, prefit=True) X_GBDTRegressor = modelGBDTRegressor.transform(X_withLabel) GBDTRegressorIndexMask = modelGBDTRegressor.get_support() # 获取筛选的mask value = X_withLabel[:,GBDTRegressorIndexMask].tolist() # 被筛选出来的列的值 GBDTRegressorIndexMask = GBDTRegressorIndexMask.tolist() GBDTRegressorIndexTrue = [] GBDTRegressorIndexFalse = [] for i in range(len(GBDTRegressorIndexMask)): # 记录下被筛选的indicator的序号 if (GBDTRegressorIndexMask[i]==True): GBDTRegressorIndexTrue.append(i) if (GBDTRegressorIndexMask[i]==False): GBDTRegressorIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(GBDTRegressorIndexTrue)): print(i+1,":",name[GBDTRegressorIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(GBDTRegressorIndexFalse)): print(i+1,":",name[GBDTRegressorIndexFalse[i]]) dataFrameOfGBDTRegressionFeature = dataFrame for i in range(len(GBDTRegressorIndexFalse)): dataFrameOfGBDTRegressionFeature = dataFrameOfGBDTRegressionFeature.drop([name[GBDTRegressorIndexFalse[i]]],axis=1) dataFrameOfGBDTRegressionFeature.to_excel('/content/drive/MyDrive/DataMining/final/GBDTRegressorFeatureSelectionOfData.xlsx') dataFrameOfGBDTRegressionFeature
被筛选后剩下的特征: 1 : FP2-A2 θ 节律, µV 2 : FP2-A2 β(LF)节律, µV 3 : F3-A1 θ 节律, µV 4 : C3-A1 δ 节律,µV 5 : C3-A1 θ 节律, µV 6 : C4-A2 δ 节律,µV 7 : C4-A2 θ 节律, µV 8 : CZ-A1 θ 节律, µV 9 : P3-A1 δ 节律,µV 10 : P3-A1 α 节律, µV 11 : P4-A2 θ 节律, µV 12 : P4-A2 α 节律, µV 13 : PZ-A2 α 节律, µV 14 : PZ-A2 β(LF)节律, µV 15 : O1-A1 θ 节律, µV 16 : O2-A2 δ 节律,µV 17 : O2-A2 θ 节律, µV 18 : O2-A2 β(LF)节律, µV 19 : F8-A2 δ 节律,µV 20 : F8-A2 α 节律, µV 21 : F8-A2 β(LF)节律, µV 22 : T3-A1 θ 节律, µV 23 : T4-A2 δ 节律,µV 24 : T4-A2 θ 节律, µV 25 : T4-A2 β(LF)节律, µV 26 : T6-A2 θ 节律, µV 27 : T6-A2 β(LF)节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 θ 节律, µV 3 : FP1-A1 α 节律, µV 4 : FP1-A1 β(LF)节律, µV 5 : FP2-A2 δ 节律,µV 6 : FP2-A2 α 节律, µV 7 : F3-A1 δ 节律,µV 8 : F3-A1 α 节律, µV 9 : F3-A1 β(LF)节律, µV 10 : F4-A2 δ 节律,µV 11 : F4-A2 θ 节律, µV 12 : F4-A2 α 节律, µV 13 : F4-A2 β(LF)节律, µV 14 : FZ-A2 δ 节律,µV 15 : FZ-A2 θ 节律, µV 16 : FZ-A2 α 节律, µV 17 : FZ-A2 β(LF)节律, µV 18 : C3-A1 α 节律, µV 19 : C3-A1 β(LF)节律, µV 20 : C4-A2 α 节律, µV 21 : C4-A2 β(LF)节律, µV 22 : CZ-A1 δ 节律,µV 23 : CZ-A1 α 节律, µV 24 : CZ-A1 β(LF)节律, µV 25 : P3-A1 θ 节律, µV 26 : P3-A1 β(LF)节律, µV 27 : P4-A2 δ 节律,µV 28 : P4-A2 β(LF)节律, µV 29 : PZ-A2 δ 节律,µV 30 : PZ-A2 θ 节律, µV 31 : O1-A1 δ 节律,µV 32 : O1-A1 α 节律, µV 33 : O1-A1 β(LF)节律, µV 34 : O2-A2 α 节律, µV 35 : F7-A1 δ 节律,µV 36 : F7-A1 θ 节律, µV 37 : F7-A1 α 节律, µV 38 : F7-A1 β(LF)节律, µV 39 : F8-A2 θ 节律, µV 40 : T3-A1 δ 节律,µV 41 : T3-A1 α 节律, µV 42 : T3-A1 β(LF)节律, µV 43 : T4-A2 α 节律, µV 44 : T5-A1 δ 节律,µV 45 : T5-A1 θ 节律, µV 46 : T5-A1 α 节律, µV 47 : T5-A1 β(LF)节律, µV 48 : T6-A2 δ 节律,µV 49 : T6-A2 α 节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Classification — Feature selection with Lasso
lasso = Lasso(alpha = 0.3,max_iter=5000).fit(X_withLabel, y_label) modelLasso = SelectFromModel(lasso, prefit=True) X_Lasso = modelLasso.transform(X_withLabel) LassoIndexMask = modelLasso.get_support() # 获取筛选的mask value = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值 LassoIndexMask = LassoIndexMask.tolist() LassoIndexTrue = [] LassoIndexFalse = [] for i in range(len(LassoIndexMask)): # 记录下被筛选的indicator的序号 if (LassoIndexMask[i]==True): LassoIndexTrue.append(i) if (LassoIndexMask[i]==False): LassoIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(LassoIndexTrue)): print(i+1,":",name[LassoIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(LassoIndexFalse)): print(i+1,":",name[LassoIndexFalse[i]]) dataFrameOfLassoClassificationFeature = dataFrame for i in range(len(LassoIndexFalse)): dataFrameOfLassoClassificationFeature = dataFrameOfLassoClassificationFeature.drop([name[LassoIndexFalse[i]]],axis=1) dataFrameOfLassoClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/LassoFeatureSelectionOfLabel.xlsx') dataFrameOfLassoClassificationFeature
被筛选后剩下的特征: 1 : FP1-A1 α 节律, µV 2 : FZ-A2 δ 节律,µV 3 : C4-A2 δ 节律,µV 4 : CZ-A1 α 节律, µV 5 : P3-A1 δ 节律,µV 6 : P4-A2 α 节律, µV 7 : PZ-A2 δ 节律,µV 8 : O2-A2 δ 节律,µV 9 : F7-A1 δ 节律,µV 10 : F7-A1 α 节律, µV 11 : T3-A1 α 节律, µV 12 : T4-A2 δ 节律,µV 13 : T4-A2 α 节律, µV 14 : T5-A1 δ 节律,µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 θ 节律, µV 3 : FP1-A1 β(LF)节律, µV 4 : FP2-A2 δ 节律,µV 5 : FP2-A2 θ 节律, µV 6 : FP2-A2 α 节律, µV 7 : FP2-A2 β(LF)节律, µV 8 : F3-A1 δ 节律,µV 9 : F3-A1 θ 节律, µV 10 : F3-A1 α 节律, µV 11 : F3-A1 β(LF)节律, µV 12 : F4-A2 δ 节律,µV 13 : F4-A2 θ 节律, µV 14 : F4-A2 α 节律, µV 15 : F4-A2 β(LF)节律, µV 16 : FZ-A2 θ 节律, µV 17 : FZ-A2 α 节律, µV 18 : FZ-A2 β(LF)节律, µV 19 : C3-A1 δ 节律,µV 20 : C3-A1 θ 节律, µV 21 : C3-A1 α 节律, µV 22 : C3-A1 β(LF)节律, µV 23 : C4-A2 θ 节律, µV 24 : C4-A2 α 节律, µV 25 : C4-A2 β(LF)节律, µV 26 : CZ-A1 δ 节律,µV 27 : CZ-A1 θ 节律, µV 28 : CZ-A1 β(LF)节律, µV 29 : P3-A1 θ 节律, µV 30 : P3-A1 α 节律, µV 31 : P3-A1 β(LF)节律, µV 32 : P4-A2 δ 节律,µV 33 : P4-A2 θ 节律, µV 34 : P4-A2 β(LF)节律, µV 35 : PZ-A2 θ 节律, µV 36 : PZ-A2 α 节律, µV 37 : PZ-A2 β(LF)节律, µV 38 : O1-A1 δ 节律,µV 39 : O1-A1 θ 节律, µV 40 : O1-A1 α 节律, µV 41 : O1-A1 β(LF)节律, µV 42 : O2-A2 θ 节律, µV 43 : O2-A2 α 节律, µV 44 : O2-A2 β(LF)节律, µV 45 : F7-A1 θ 节律, µV 46 : F7-A1 β(LF)节律, µV 47 : F8-A2 δ 节律,µV 48 : F8-A2 θ 节律, µV 49 : F8-A2 α 节律, µV 50 : F8-A2 β(LF)节律, µV 51 : T3-A1 δ 节律,µV 52 : T3-A1 θ 节律, µV 53 : T3-A1 β(LF)节律, µV 54 : T4-A2 θ 节律, µV 55 : T4-A2 β(LF)节律, µV 56 : T5-A1 θ 节律, µV 57 : T5-A1 α 节律, µV 58 : T5-A1 β(LF)节律, µV 59 : T6-A2 δ 节律,µV 60 : T6-A2 θ 节律, µV 61 : T6-A2 α 节律, µV 62 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with SVC
lsvc = LinearSVC(C=10,max_iter=10000,dual=False).fit(X_withLabel, y_label.ravel()) modelLSVC = SelectFromModel(lsvc, prefit=True) X_LSVR = modelLSVR.transform(X_withLabel) SVCIndexMask = modelLSVC.get_support() # 获取筛选的mask value = X_withLabel[:,SVCIndexMask].tolist() # 被筛选出来的列的值 SVCIndexMask = SVCIndexMask.tolist() SVCIndexTrue = [] SVCIndexFalse = [] for i in range(len(SVCIndexMask)): # 记录下被筛选的indicator的序号 if (SVCIndexMask[i]==True): SVCIndexTrue.append(i) if (SVCIndexMask[i]==False): SVCIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(SVCIndexTrue)): print(i+1,":",name[SVCIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(SVCIndexFalse)): print(i+1,":",name[SVCIndexFalse[i]]) dataFrameOfLSVClassificationFeature = dataFrame for i in range(len(SVCIndexFalse)): dataFrameOfLSVClassificationFeature = dataFrameOfLSVClassificationFeature.drop([name[SVCIndexFalse[i]]],axis=1) dataFrameOfLSVClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/LSVCFeatureSelectionOfLabel.xlsx') dataFrameOfLSVClassificationFeature
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP2-A2 θ 节律, µV 3 : FP2-A2 α 节律, µV 4 : FP2-A2 β(LF)节律, µV 5 : FZ-A2 β(LF)节律, µV 6 : C3-A1 θ 节律, µV 7 : C3-A1 β(LF)节律, µV 8 : C4-A2 δ 节律,µV 9 : C4-A2 θ 节律, µV 10 : C4-A2 α 节律, µV 11 : CZ-A1 δ 节律,µV 12 : CZ-A1 θ 节律, µV 13 : CZ-A1 α 节律, µV 14 : P3-A1 β(LF)节律, µV 15 : P4-A2 θ 节律, µV 16 : P4-A2 β(LF)节律, µV 17 : PZ-A2 β(LF)节律, µV 18 : O2-A2 δ 节律,µV 19 : O2-A2 θ 节律, µV 20 : O2-A2 α 节律, µV 21 : F7-A1 θ 节律, µV 22 : F7-A1 α 节律, µV 23 : F7-A1 β(LF)节律, µV 24 : F8-A2 β(LF)节律, µV 25 : T4-A2 δ 节律,µV 26 : T4-A2 θ 节律, µV 27 : T4-A2 α 节律, µV 28 : T4-A2 β(LF)节律, µV 29 : T5-A1 θ 节律, µV 30 : T6-A2 δ 节律,µV 31 : T6-A2 α 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 α 节律, µV 3 : FP1-A1 β(LF)节律, µV 4 : FP2-A2 δ 节律,µV 5 : F3-A1 δ 节律,µV 6 : F3-A1 θ 节律, µV 7 : F3-A1 α 节律, µV 8 : F3-A1 β(LF)节律, µV 9 : F4-A2 δ 节律,µV 10 : F4-A2 θ 节律, µV 11 : F4-A2 α 节律, µV 12 : F4-A2 β(LF)节律, µV 13 : FZ-A2 δ 节律,µV 14 : FZ-A2 θ 节律, µV 15 : FZ-A2 α 节律, µV 16 : C3-A1 δ 节律,µV 17 : C3-A1 α 节律, µV 18 : C4-A2 β(LF)节律, µV 19 : CZ-A1 β(LF)节律, µV 20 : P3-A1 δ 节律,µV 21 : P3-A1 θ 节律, µV 22 : P3-A1 α 节律, µV 23 : P4-A2 δ 节律,µV 24 : P4-A2 α 节律, µV 25 : PZ-A2 δ 节律,µV 26 : PZ-A2 θ 节律, µV 27 : PZ-A2 α 节律, µV 28 : O1-A1 δ 节律,µV 29 : O1-A1 θ 节律, µV 30 : O1-A1 α 节律, µV 31 : O1-A1 β(LF)节律, µV 32 : O2-A2 β(LF)节律, µV 33 : F7-A1 δ 节律,µV 34 : F8-A2 δ 节律,µV 35 : F8-A2 θ 节律, µV 36 : F8-A2 α 节律, µV 37 : T3-A1 δ 节律,µV 38 : T3-A1 θ 节律, µV 39 : T3-A1 α 节律, µV 40 : T3-A1 β(LF)节律, µV 41 : T5-A1 δ 节律,µV 42 : T5-A1 α 节律, µV 43 : T5-A1 β(LF)节律, µV 44 : T6-A2 θ 节律, µV 45 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with a decision tree
decisionTree = DecisionTreeClassifier(random_state=1).fit(X_withLabel, y_label) modelDecisionTree = SelectFromModel(decisionTree, prefit=True) X_DecisionTree = modelDecisionTree.transform(X_withLabel) decisionTreeIndexMask = modelDecisionTree.get_support() # 获取筛选的mask value = X_withLabel[:,LassoIndexMask].tolist() # 被筛选出来的列的值 decisionTreeIndexMask = decisionTreeIndexMask.tolist() decisionTreeIndexTrue = [] decisionTreeIndexFalse = [] for i in range(len(decisionTreeIndexMask)): # 记录下被筛选的indicator的序号 if (decisionTreeIndexMask[i]==True): decisionTreeIndexTrue.append(i) if (decisionTreeIndexMask[i]==False): decisionTreeIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(decisionTreeIndexTrue)): print(i+1,":",name[decisionTreeIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(decisionTreeIndexFalse)): print(i+1,":",name[decisionTreeIndexFalse[i]]) dataFrameOfDecisionTreeClassificationFeature = dataFrame for i in range(len(decisionTreeIndexFalse)): dataFrameOfDecisionTreeClassificationFeature = dataFrameOfDecisionTreeClassificationFeature.drop([name[decisionTreeIndexFalse[i]]],axis=1) dataFrameOfDecisionTreeClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/DecisionTreeFeatureSelectionOfLabel.xlsx') dataFrameOfDecisionTreeClassificationFeature
被筛选后剩下的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 α 节律, µV 3 : F3-A1 θ 节律, µV 4 : C3-A1 θ 节律, µV 5 : CZ-A1 δ 节律,µV 6 : CZ-A1 β(LF)节律, µV 7 : P3-A1 α 节律, µV 8 : PZ-A2 β(LF)节律, µV 9 : O2-A2 δ 节律,µV 10 : O2-A2 β(LF)节律, µV 11 : F7-A1 θ 节律, µV 12 : T4-A2 δ 节律,µV 13 : T5-A1 α 节律, µV 14 : T6-A2 α 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 θ 节律, µV 2 : FP1-A1 β(LF)节律, µV 3 : FP2-A2 δ 节律,µV 4 : FP2-A2 θ 节律, µV 5 : FP2-A2 α 节律, µV 6 : FP2-A2 β(LF)节律, µV 7 : F3-A1 δ 节律,µV 8 : F3-A1 α 节律, µV 9 : F3-A1 β(LF)节律, µV 10 : F4-A2 δ 节律,µV 11 : F4-A2 θ 节律, µV 12 : F4-A2 α 节律, µV 13 : F4-A2 β(LF)节律, µV 14 : FZ-A2 δ 节律,µV 15 : FZ-A2 θ 节律, µV 16 : FZ-A2 α 节律, µV 17 : FZ-A2 β(LF)节律, µV 18 : C3-A1 δ 节律,µV 19 : C3-A1 α 节律, µV 20 : C3-A1 β(LF)节律, µV 21 : C4-A2 δ 节律,µV 22 : C4-A2 θ 节律, µV 23 : C4-A2 α 节律, µV 24 : C4-A2 β(LF)节律, µV 25 : CZ-A1 θ 节律, µV 26 : CZ-A1 α 节律, µV 27 : P3-A1 δ 节律,µV 28 : P3-A1 θ 节律, µV 29 : P3-A1 β(LF)节律, µV 30 : P4-A2 δ 节律,µV 31 : P4-A2 θ 节律, µV 32 : P4-A2 α 节律, µV 33 : P4-A2 β(LF)节律, µV 34 : PZ-A2 δ 节律,µV 35 : PZ-A2 θ 节律, µV 36 : PZ-A2 α 节律, µV 37 : O1-A1 δ 节律,µV 38 : O1-A1 θ 节律, µV 39 : O1-A1 α 节律, µV 40 : O1-A1 β(LF)节律, µV 41 : O2-A2 θ 节律, µV 42 : O2-A2 α 节律, µV 43 : F7-A1 δ 节律,µV 44 : F7-A1 α 节律, µV 45 : F7-A1 β(LF)节律, µV 46 : F8-A2 δ 节律,µV 47 : F8-A2 θ 节律, µV 48 : F8-A2 α 节律, µV 49 : F8-A2 β(LF)节律, µV 50 : T3-A1 δ 节律,µV 51 : T3-A1 θ 节律, µV 52 : T3-A1 α 节律, µV 53 : T3-A1 β(LF)节律, µV 54 : T4-A2 θ 节律, µV 55 : T4-A2 α 节律, µV 56 : T4-A2 β(LF)节律, µV 57 : T5-A1 δ 节律,µV 58 : T5-A1 θ 节律, µV 59 : T5-A1 β(LF)节律, µV 60 : T6-A2 δ 节律,µV 61 : T6-A2 θ 节律, µV 62 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with a random forest
randomForest = RandomForestRegressor().fit(X_withLabel, y_label) modelrandomForest = SelectFromModel(randomForest, prefit=True) X_randomForest = modelrandomForest.transform(X_withLabel) randomForestIndexMask = modelrandomForest.get_support() # 获取筛选的mask value = X_withLabel[:,randomForestIndexMask].tolist() # 被筛选出来的列的值 randomForestIndexMask = randomForestIndexMask.tolist() randomForestIndexTrue = [] randomForestIndexFalse = [] for i in range(len(randomForestIndexMask)): # 记录下被筛选的indicator的序号 if (randomForestIndexMask[i]==True): randomForestIndexTrue.append(i) if (randomForestIndexMask[i]==False): randomForestIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(randomForestIndexTrue)): print(i+1,":",name[randomForestIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(randomForestIndexFalse)): print(i+1,":",name[randomForestIndexFalse[i]]) dataFrameOfRandomForestClassificationFeature = dataFrame for i in range(len(randomForestIndexFalse)): dataFrameOfRandomForestClassificationFeature = dataFrameOfRandomForestClassificationFeature.drop([name[randomForestIndexFalse[i]]],axis=1) dataFrameOfRandomForestClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/RandomForestFeatureSelectionOfLabel.xlsx') dataFrameOfRandomForestClassificationFeature
被筛选后剩下的特征: 1 : FP1-A1 θ 节律, µV 2 : FP2-A2 β(LF)节律, µV 3 : F4-A2 α 节律, µV 4 : F4-A2 β(LF)节律, µV 5 : FZ-A2 β(LF)节律, µV 6 : C3-A1 β(LF)节律, µV 7 : C4-A2 δ 节律,µV 8 : C4-A2 θ 节律, µV 9 : C4-A2 α 节律, µV 10 : CZ-A1 α 节律, µV 11 : P3-A1 δ 节律,µV 12 : P3-A1 α 节律, µV 13 : P3-A1 β(LF)节律, µV 14 : P4-A2 δ 节律,µV 15 : P4-A2 θ 节律, µV 16 : P4-A2 α 节律, µV 17 : PZ-A2 β(LF)节律, µV 18 : O2-A2 δ 节律,µV 19 : O2-A2 β(LF)节律, µV 20 : F7-A1 θ 节律, µV 21 : F8-A2 α 节律, µV 22 : F8-A2 β(LF)节律, µV 23 : T3-A1 θ 节律, µV 24 : T4-A2 δ 节律,µV 25 : T4-A2 θ 节律, µV 26 : T4-A2 α 节律, µV 27 : T4-A2 β(LF)节律, µV 28 : T5-A1 δ 节律,µV 29 : T5-A1 θ 节律, µV 30 : T5-A1 β(LF)节律, µV 31 : T6-A2 θ 节律, µV 32 : T6-A2 α 节律, µV 33 : T6-A2 β(LF)节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 α 节律, µV 3 : FP1-A1 β(LF)节律, µV 4 : FP2-A2 δ 节律,µV 5 : FP2-A2 θ 节律, µV 6 : FP2-A2 α 节律, µV 7 : F3-A1 δ 节律,µV 8 : F3-A1 θ 节律, µV 9 : F3-A1 α 节律, µV 10 : F3-A1 β(LF)节律, µV 11 : F4-A2 δ 节律,µV 12 : F4-A2 θ 节律, µV 13 : FZ-A2 δ 节律,µV 14 : FZ-A2 θ 节律, µV 15 : FZ-A2 α 节律, µV 16 : C3-A1 δ 节律,µV 17 : C3-A1 θ 节律, µV 18 : C3-A1 α 节律, µV 19 : C4-A2 β(LF)节律, µV 20 : CZ-A1 δ 节律,µV 21 : CZ-A1 θ 节律, µV 22 : CZ-A1 β(LF)节律, µV 23 : P3-A1 θ 节律, µV 24 : P4-A2 β(LF)节律, µV 25 : PZ-A2 δ 节律,µV 26 : PZ-A2 θ 节律, µV 27 : PZ-A2 α 节律, µV 28 : O1-A1 δ 节律,µV 29 : O1-A1 θ 节律, µV 30 : O1-A1 α 节律, µV 31 : O1-A1 β(LF)节律, µV 32 : O2-A2 θ 节律, µV 33 : O2-A2 α 节律, µV 34 : F7-A1 δ 节律,µV 35 : F7-A1 α 节律, µV 36 : F7-A1 β(LF)节律, µV 37 : F8-A2 δ 节律,µV 38 : F8-A2 θ 节律, µV 39 : T3-A1 δ 节律,µV 40 : T3-A1 α 节律, µV 41 : T3-A1 β(LF)节律, µV 42 : T5-A1 α 节律, µV 43 : T6-A2 δ 节律,µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Feature selection with GBDT
GBDTClassifier = GradientBoostingClassifier().fit(X_withLabel, y_label) modelGBDTClassifier = SelectFromModel(GBDTClassifier, prefit=True) X_GBDTClassifier = modelGBDTClassifier.transform(X_withLabel) GBDTClassifierIndexMask = modelGBDTClassifier.get_support() # 获取筛选的mask value = X_withLabel[:,GBDTClassifierIndexMask].tolist() # 被筛选出来的列的值 GBDTClassifierIndexMask = GBDTClassifierIndexMask.tolist() GBDTClassifierIndexTrue = [] GBDTClassifierIndexFalse = [] for i in range(len(GBDTClassifierIndexMask)): # 记录下被筛选的indicator的序号 if (GBDTClassifierIndexMask[i]==True): GBDTClassifierIndexTrue.append(i) if (GBDTClassifierIndexMask[i]==False): GBDTClassifierIndexFalse.append(i) print("被筛选后剩下的特征:") for i in range(len(GBDTClassifierIndexTrue)): print(i+1,":",name[GBDTClassifierIndexTrue[i]]) print("\n被筛选后去掉的特征:") for i in range(len(GBDTClassifierIndexFalse)): print(i+1,":",name[GBDTClassifierIndexFalse[i]]) dataFrameOfGBDTClassificationFeature = dataFrame for i in range(len(GBDTClassifierIndexFalse)): dataFrameOfGBDTClassificationFeature = dataFrameOfGBDTClassificationFeature.drop([name[GBDTClassifierIndexFalse[i]]],axis=1) dataFrameOfGBDTClassificationFeature.to_excel('/content/drive/MyDrive/DataMining/final/GBDTClassifierFeatureSelectionOfLabel.xlsx') dataFrameOfGBDTClassificationFeature
被筛选后剩下的特征: 1 : FP1-A1 α 节律, µV 2 : FP2-A2 θ 节律, µV 3 : FP2-A2 β(LF)节律, µV 4 : C4-A2 θ 节律, µV 5 : P3-A1 α 节律, µV 6 : P4-A2 α 节律, µV 7 : P4-A2 β(LF)节律, µV 8 : PZ-A2 β(LF)节律, µV 9 : O2-A2 δ 节律,µV 10 : F7-A1 δ 节律,µV 11 : F8-A2 δ 节律,µV 12 : F8-A2 β(LF)节律, µV 13 : T3-A1 θ 节律, µV 14 : T4-A2 δ 节律,µV 15 : T4-A2 θ 节律, µV 16 : T5-A1 α 节律, µV 被筛选后去掉的特征: 1 : FP1-A1 δ 节律,µV 2 : FP1-A1 θ 节律, µV 3 : FP1-A1 β(LF)节律, µV 4 : FP2-A2 δ 节律,µV 5 : FP2-A2 α 节律, µV 6 : F3-A1 δ 节律,µV 7 : F3-A1 θ 节律, µV 8 : F3-A1 α 节律, µV 9 : F3-A1 β(LF)节律, µV 10 : F4-A2 δ 节律,µV 11 : F4-A2 θ 节律, µV 12 : F4-A2 α 节律, µV 13 : F4-A2 β(LF)节律, µV 14 : FZ-A2 δ 节律,µV 15 : FZ-A2 θ 节律, µV 16 : FZ-A2 α 节律, µV 17 : FZ-A2 β(LF)节律, µV 18 : C3-A1 δ 节律,µV 19 : C3-A1 θ 节律, µV 20 : C3-A1 α 节律, µV 21 : C3-A1 β(LF)节律, µV 22 : C4-A2 δ 节律,µV 23 : C4-A2 α 节律, µV 24 : C4-A2 β(LF)节律, µV 25 : CZ-A1 δ 节律,µV 26 : CZ-A1 θ 节律, µV 27 : CZ-A1 α 节律, µV 28 : CZ-A1 β(LF)节律, µV 29 : P3-A1 δ 节律,µV 30 : P3-A1 θ 节律, µV 31 : P3-A1 β(LF)节律, µV 32 : P4-A2 δ 节律,µV 33 : P4-A2 θ 节律, µV 34 : PZ-A2 δ 节律,µV 35 : PZ-A2 θ 节律, µV 36 : PZ-A2 α 节律, µV 37 : O1-A1 δ 节律,µV 38 : O1-A1 θ 节律, µV 39 : O1-A1 α 节律, µV 40 : O1-A1 β(LF)节律, µV 41 : O2-A2 θ 节律, µV 42 : O2-A2 α 节律, µV 43 : O2-A2 β(LF)节律, µV 44 : F7-A1 θ 节律, µV 45 : F7-A1 α 节律, µV 46 : F7-A1 β(LF)节律, µV 47 : F8-A2 θ 节律, µV 48 : F8-A2 α 节律, µV 49 : T3-A1 δ 节律,µV 50 : T3-A1 α 节律, µV 51 : T3-A1 β(LF)节律, µV 52 : T4-A2 α 节律, µV 53 : T4-A2 β(LF)节律, µV 54 : T5-A1 δ 节律,µV 55 : T5-A1 θ 节律, µV 56 : T5-A1 β(LF)节律, µV 57 : T6-A2 δ 节律,µV 58 : T6-A2 θ 节律, µV 59 : T6-A2 α 节律, µV 60 : T6-A2 β(LF)节律, µV
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Testing the selected features — Load the PCA- and LDA-reduced data — Collect the feature-selected data
RegressionFeatureSelection = [dataFrameOfLassoRegressionFeature,dataFrameOfLSVRegressionFeature,dataFrameOfDecisionTreeRegressionFeature, dataFrameOfRandomForestRegressionFeature,dataFrameOfGBDTRegressionFeature] ClassificationFeatureSelection = [dataFrameOfLassoClassificationFeature,dataFrameOfLSVClassificationFeature,dataFrameOfDecisionTreeClassificationFeature, dataFrameOfRandomForestClassificationFeature,dataFrameOfGBDTClassificationFeature]
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
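The heading mentions PCA- and LDA-reduced data, but no loading code appears in this cell, so that input is presumably prepared elsewhere. For illustration only, a sketch of how such reductions could be produced from the labelled matrices defined earlier (`X_withLabel`, `y_label`); the component counts are arbitrary assumptions:

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Unsupervised projection of the 76 features onto 10 principal components (10 is arbitrary).
X_pca = PCA(n_components=10).fit_transform(X_withLabel)

# Supervised projection; with binary labels LDA yields at most one discriminant component.
X_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X_withLabel, y_label)

print(X_pca.shape, X_lda.shape)
```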
Evaluating the selected regression features
allMSEResult=[] allr2Result=[] print("LR测试结果") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=LinearRegression() clf.fit(train_X,train_y) pred_y = clf.predict(test_X) if(i==0): tempMSE=[] tempr2=[] tempMSE.append(mean_squared_error(test_y,pred_y)) tempr2.append(r2_score(test_y,pred_y)) if(i==len(RegressionFeatureSelection)-1): allMSEResult.append(min(tempMSE)) allr2Result.append(max(tempr2)) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('Coefficient of determination: %.2f' % r2_score(test_y, pred_y)) print("\nSVR测试结果") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=SVR() clf.fit(train_X,train_y) pred_y = clf.predict(test_X) if(i==0): tempMSE=[] tempr2=[] tempMSE.append(mean_squared_error(test_y,pred_y)) tempr2.append(r2_score(test_y,pred_y)) if(i==len(RegressionFeatureSelection)-1): allMSEResult.append(min(tempMSE)) allr2Result.append(max(tempr2)) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('Coefficient of determination: %.2f' % r2_score(test_y, pred_y)) print("\n决策树测试结果") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=DecisionTreeRegressor(random_state=4) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) if(i==0): tempMSE=[] tempr2=[] tempMSE.append(mean_squared_error(test_y,pred_y)) tempr2.append(r2_score(test_y,pred_y)) if(i==len(RegressionFeatureSelection)-1): allMSEResult.append(min(tempMSE)) allr2Result.append(max(tempr2)) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('Coefficient of determination: %.2f' % r2_score(test_y, pred_y)) print("\nGBDT测试结果") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=GradientBoostingRegressor(random_state=4) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) if(i==0): tempMSE=[] tempr2=[] tempMSE.append(mean_squared_error(test_y,pred_y)) tempr2.append(r2_score(test_y,pred_y)) if(i==len(RegressionFeatureSelection)-1): allMSEResult.append(min(tempMSE)) allr2Result.append(max(tempr2)) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('Coefficient of determination: %.2f' % r2_score(test_y, pred_y)) print("\n随机森林测试结果") for i in range(len(RegressionFeatureSelection)): tempArray = np.array(RegressionFeatureSelection[i])[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3] train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=RandomForestRegressor(random_state=4) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) if(i==0): tempMSE=[] tempr2=[] tempMSE.append(mean_squared_error(test_y,pred_y)) tempr2.append(r2_score(test_y,pred_y)) if(i==len(RegressionFeatureSelection)-1): allMSEResult.append(min(tempMSE)) allr2Result.append(max(tempr2)) print('Mean squared error: %.2f' % 
mean_squared_error(test_y, pred_y)) print('Coefficient of determination: %.2f' % r2_score(test_y, pred_y)) modelNamelist = ['LR','SVR','决策树','GBDT','随机森林'] for i in range(5): if(i==0): print() print(modelNamelist[i]+"测试结果") print('Best MSE -',i+1,': %.2f' % (allMSEResult)[i]) print('Best R2-Score -',i+1,': %.2f\n' % (allr2Result)[i])
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Regression performance with the original features
print("LR测试结果") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=LinearRegression() clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('R2-Score: %.2f' % r2_score(test_y, pred_y)) print("\nSVR测试结果") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=SVR() clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('R2-Score: %.2f' % r2_score(test_y, pred_y)) print("\n决策树测试结果") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=DecisionTreeRegressor(random_state=0) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('R2-Score: %.2f' % r2_score(test_y, pred_y)) print("\nGBDT测试结果") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=GradientBoostingRegressor(random_state=0) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('R2-Score: %.2f' % r2_score(test_y, pred_y)) print("\n随机森林测试结果") tempArray = dataArray[:92,:] temp_X = tempArray[:,5:] temp_y = tempArray[:,3].astype(int) train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4) clf=RandomForestRegressor(random_state=0) clf.fit(train_X,train_y) pred_y = clf.predict(test_X) print('Mean squared error: %.2f' % mean_squared_error(test_y, pred_y)) print('R2-Score: %.2f' % r2_score(test_y, pred_y))
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Evaluating the selected classification features
allAccuracyResult=[]
allF1Result=[]

# Logistic regression on each feature-selected subset
print("LR test results")
for i in range(len(ClassificationFeatureSelection)):
    tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]
    temp_X = tempArray[:,5:]
    temp_y = tempArray[:,4].astype(int)
    train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
    clf=LogisticRegression(max_iter=10000)
    clf.fit(train_X,train_y)
    pred_y = clf.predict(test_X)
    if(i==0):
        tempAccuracy=[]
        tempF1=[]
    tempAccuracy.append(accuracy_score(test_y,pred_y))
    tempF1.append(f1_score(test_y,pred_y))
    if(i==len(ClassificationFeatureSelection)-1):
        allAccuracyResult.append(max(tempAccuracy))
        allF1Result.append(max(tempF1))
    print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
    print('F1-Score: %.2f\n' % f1_score(test_y, pred_y))

# Support vector classifier on each feature-selected subset
print("\nSVC test results")
for i in range(len(ClassificationFeatureSelection)):
    tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]
    temp_X = tempArray[:,5:]
    temp_y = tempArray[:,4].astype(int)
    train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
    clf=SVC()
    clf.fit(train_X,train_y)
    pred_y = clf.predict(test_X)
    if(i==0):
        tempAccuracy=[]
        tempF1=[]
    tempAccuracy.append(accuracy_score(test_y,pred_y))
    tempF1.append(f1_score(test_y,pred_y))
    if(i==len(ClassificationFeatureSelection)-1):
        allAccuracyResult.append(max(tempAccuracy))
        allF1Result.append(max(tempF1))
    print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
    print('F1-Score: %.2f\n' % f1_score(test_y, pred_y))

# Decision tree on each feature-selected subset
print("\nDecision tree test results")
for i in range(len(ClassificationFeatureSelection)):
    tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]
    temp_X = tempArray[:,5:]
    temp_y = tempArray[:,4].astype(int)
    train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
    clf=DecisionTreeClassifier(random_state=0)
    clf.fit(train_X,train_y)
    pred_y = clf.predict(test_X)
    if(i==0):
        tempAccuracy=[]
        tempF1=[]
    tempAccuracy.append(accuracy_score(test_y,pred_y))
    tempF1.append(f1_score(test_y,pred_y))
    if(i==len(ClassificationFeatureSelection)-1):
        allAccuracyResult.append(max(tempAccuracy))
        allF1Result.append(max(tempF1))
    print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
    print('F1-Score: %.2f\n' % f1_score(test_y, pred_y))

# Gradient boosting (GBDT) on each feature-selected subset
print("\nGBDT test results")
for i in range(len(ClassificationFeatureSelection)):
    tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]
    temp_X = tempArray[:,5:]
    temp_y = tempArray[:,4].astype(int)
    train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
    clf=GradientBoostingClassifier(random_state=0)
    clf.fit(train_X,train_y)
    pred_y = clf.predict(test_X)
    if(i==0):
        tempAccuracy=[]
        tempF1=[]
    tempAccuracy.append(accuracy_score(test_y,pred_y))
    tempF1.append(f1_score(test_y,pred_y))
    if(i==len(ClassificationFeatureSelection)-1):
        allAccuracyResult.append(max(tempAccuracy))
        allF1Result.append(max(tempF1))
    print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
    print('F1-Score: %.2f\n' % f1_score(test_y, pred_y))

# Random forest on each feature-selected subset
print("\nRandom forest test results")
for i in range(len(ClassificationFeatureSelection)):
    tempArray = np.array(ClassificationFeatureSelection[i])[:92,:]
    temp_X = tempArray[:,5:]
    temp_y = tempArray[:,4].astype(int)
    train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
    clf=RandomForestClassifier(random_state=0)
    clf.fit(train_X,train_y)
    pred_y = clf.predict(test_X)
    if(i==0):
        tempAccuracy=[]
        tempF1=[]
    tempAccuracy.append(accuracy_score(test_y,pred_y))
    tempF1.append(f1_score(test_y,pred_y))
    if(i==len(ClassificationFeatureSelection)-1):
        allAccuracyResult.append(max(tempAccuracy))
        allF1Result.append(max(tempF1))
    print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
    print('F1-Score: %.2f\n' % f1_score(test_y, pred_y))

# Best accuracy and F1 achieved by each model across all feature subsets
modelNamelist = ['LR','SVC','Decision tree','GBDT','Random forest']
for i in range(5):
    if(i==0):
        print()
    print(modelNamelist[i]+" test results")
    print('Best Accuracy -',i+1,': %.2f' % allAccuracyResult[i])
    print('Best F1-Score -',i+1,': %.2f\n' % allF1Result[i])
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
Classification performance with the original features
# Logistic regression on the original feature set
print("LR test results")
tempArray = dataArray[:92,:]
temp_X = tempArray[:,5:]
temp_y = tempArray[:,4].astype(int)
train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
clf=LogisticRegression(max_iter=10000)
clf.fit(train_X,train_y)
pred_y = clf.predict(test_X)
print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
print('F1-Score: %.2f' % f1_score(test_y, pred_y))

# Support vector classifier on the original feature set
print("\nSVC test results")
tempArray = dataArray[:92,:]
temp_X = tempArray[:,5:]
temp_y = tempArray[:,4].astype(int)
train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
clf=SVC()
clf.fit(train_X,train_y)
pred_y = clf.predict(test_X)
print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
print('F1-Score: %.2f' % f1_score(test_y, pred_y))

# Decision tree on the original feature set
print("\nDecision tree test results")
tempArray = dataArray[:92,:]
temp_X = tempArray[:,5:]
temp_y = tempArray[:,4].astype(int)
train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
clf=DecisionTreeClassifier(random_state=0)
clf.fit(train_X,train_y)
pred_y = clf.predict(test_X)
print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
print('F1-Score: %.2f' % f1_score(test_y, pred_y))

# Gradient boosting (GBDT) on the original feature set
print("\nGBDT test results")
tempArray = dataArray[:92,:]
temp_X = tempArray[:,5:]
temp_y = tempArray[:,4].astype(int)
train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
clf=GradientBoostingClassifier(random_state=0)
clf.fit(train_X,train_y)
pred_y = clf.predict(test_X)
print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
print('F1-Score: %.2f' % f1_score(test_y, pred_y))

# Random forest on the original feature set
print("\nRandom forest test results")
tempArray = dataArray[:92,:]
temp_X = tempArray[:,5:]
temp_y = tempArray[:,4].astype(int)
train_X,test_X,train_y,test_y = train_test_split(temp_X,temp_y,test_size=0.2,random_state=4)
clf=RandomForestClassifier(random_state=0)
clf.fit(train_X,train_y)
pred_y = clf.predict(test_X)
print('Accuracy: %.2f' % accuracy_score(test_y, pred_y))
print('F1-Score: %.2f' % f1_score(test_y, pred_y))
_____no_output_____
MIT
Project-2/FinalProjectFeatureSelection.ipynb
JasonCZH4/SCNU-CS-2018-DataMining
CSX46: Class session 2

*Introduction to the igraph package and the Pathway Commons network in SIF format*

Objective: load a network of human molecular interactions and create three igraph graph objects from it (one for protein-protein interactions, one for metabolic interactions, and one for directed protein-protein interactions).

OK, we are going to read in the Pathway Commons data in SIF format. Recall that a SIF file is a tab-separated value file. You can find the file as `shared/pathway_commons.sif`. Load it into a data frame `pcdf` using the built-in function `read.table`. Don't forget to specify that the separator is the tab `\t` and that no quoting is allowed (`quote=""`). Use the `col.names` argument to name the three columns `species1`, `interaction_type`, and `species2`. Make sure to specify that there is no header and that `stringsAsFactors=FALSE`.

For help on using `read.table`, just type `?read.table`.

Note: for each row, the `interaction_type` column contains one of 11 different interaction types (identified by a string, like `interacts-with` or `controls-production-of`).
pcdf <- read.table("shared/pathway_commons.sif", sep="\t", quote="", comment.char="", stringsAsFactors=FALSE, header=FALSE, col.names=c("species1","interaction_type","species2"))
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
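Before going on, it can be useful to see how the rows are distributed across the interaction types mentioned above. A minimal check, assuming `pcdf` has been loaded as in the previous cell (the exact counts will depend on your copy of `pathway_commons.sif`):

# how many rows does each interaction type contribute?
sort(table(pcdf$interaction_type), decreasing=TRUE)

# number of distinct interaction types in the file
length(unique(pcdf$interaction_type))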
Let's take a peek at `pcdf` using the `head` function. Then load the `igraph` package and define three vectors of interaction types: one for protein-protein interactions, one for metabolic interactions, and one for directed protein-protein interactions:
head(pcdf)

library(igraph)

# interaction types defining the three networks
interaction_types_ppi <- c("interacts-with", "in-complex-with", "neighbor-of")
interaction_types_metab <- c("controls-production-of", "consumption-controlled-by", "controls-transport-of-chemical")
interaction_types_ppd <- c("catalysis-precedes", "controls-phosphorylation-of", "controls-state-change-of", "controls-transport-of", "controls-expression-of")
Attaching package: ‘igraph’ The following objects are masked from ‘package:stats’: decompose, spectrum The following object is masked from ‘package:base’: union
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
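As a sanity check, you can ask which interaction types in the file, if any, are not covered by the three vectors just defined. A small sketch, assuming `pcdf` and the three `interaction_types_*` vectors from the cells above:

# interaction types present in the data but not assigned to any of the three groups
covered_types <- c(interaction_types_ppi, interaction_types_metab, interaction_types_ppd)
setdiff(unique(pcdf$interaction_type), covered_types)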
Subset data frame `pcdf` to obtain only the rows whose interactions are in `interaction_types_ppi`, and select only columns 1 and 3:
pcdf_ppi <- pcdf[pcdf$interaction_type %in% interaction_types_ppi,c(1,3)]
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
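Equivalently, the same subsetting can be written with base R's `subset`; this is only an alternative spelling of the bracket indexing above, not a required step:

# keep the PPI rows and drop the interaction_type column
pcdf_ppi_alt <- subset(pcdf, interaction_type %in% interaction_types_ppi, select=c(species1, species2))
nrow(pcdf_ppi_alt)   # should equal nrow(pcdf_ppi)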
Use the `igraph` function `graph_from_data_frame` to build a network from the edge-list data in `pcdf_ppi`; use `print` to see a summary of the graph:
graph_ppi <- graph_from_data_frame(pcdf_ppi, directed=FALSE)
print(graph_ppi)
IGRAPH ba9e496 UN-- 17020 523498 -- + attr: name (v/c) + edges from ba9e496 (vertex names): [1] A1BG--ABCC6 A1BG--ANXA7 A1BG--CDKN1A A1BG--CRISP3 A1BG--GDPD1 [6] A1BG--GRB2 A1BG--GRB7 A1BG--HNF4A A1BG--ONECUT1 A1BG--PIK3CA [11] A1BG--PIK3R1 A1BG--PRDX4 A1BG--PTPN11 A1BG--SETD7 A1BG--SMN1 [16] A1BG--SMN2 A1BG--SNCA A1BG--SOS1 A1BG--TK1 A1CF--ACBD3 [21] A1CF--ACLY A1CF--APOBEC1 A1CF--APOBEC1 A1CF--ATF2 A1CF--CELF2 [26] A1CF--CTNNB1 A1CF--E2F1 A1CF--E2F3 A1CF--E2F4 A1CF--FHL3 [31] A1CF--HNF1A A1CF--HNF4A A1CF--JUN A1CF--KAT5 A1CF--KHSRP [36] A1CF--MBD2 A1CF--MBD3 A1CF--NRF1 A1CF--RBL2 A1CF--REL + ... omitted several edges
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
Do the same for the metabolic network:
pcdf_metab <- pcdf[pcdf$interaction_type %in% interaction_types_metab, c(1,3)]
graph_metab <- graph_from_data_frame(pcdf_metab, directed=TRUE)
print(graph_metab)
IGRAPH 77472bf DN-- 7620 38145 -- + attr: name (v/c) + edges from 77472bf (vertex names): [1] A4GALT->CHEBI:17659 A4GALT->CHEBI:17950 A4GALT->CHEBI:18307 [4] A4GALT->CHEBI:18313 A4GALT->CHEBI:58223 A4GALT->CHEBI:67119 [7] A4GNT ->CHEBI:17659 A4GNT ->CHEBI:58223 AAAS ->CHEBI:1604 [10] AAAS ->CHEBI:2274 AACS ->CHEBI:13705 AACS ->CHEBI:15345 [13] AACS ->CHEBI:17369 AACS ->CHEBI:18361 AACS ->CHEBI:29888 [16] AACS ->CHEBI:57286 AACS ->CHEBI:57287 AACS ->CHEBI:57288 [19] AACS ->CHEBI:57392 AACS ->CHEBI:58280 AADAC ->CHEBI:17790 [22] AADAC ->CHEBI:40574 AADAC ->CHEBI:4743 AADAC ->CHEBI:85505 + ... omitted several edges
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
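Notice in the printed edges that the metabolic network mixes gene symbols with ChEBI chemical identifiers. If you want to count the chemical vertices, one way (assuming, as in the output above, that chemical names carry the `CHEBI:` prefix) is:

# flag vertices whose names look like ChEBI chemical identifiers
is_chemical <- grepl("^CHEBI:", V(graph_metab)$name)
table(is_chemical)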
Do the same for the directed protein-protein interactions:
pcdf_ppd <- pcdf[pcdf$interaction_type %in% interaction_types_ppd, c(1,3)]
graph_ppd <- graph_from_data_frame(pcdf_ppd, directed=TRUE)
print(graph_ppd)
IGRAPH DN-- 16063 359713 -- + attr: name (v/c), interaction_type (e/c) IGRAPH DN-- 16063 359713 -- + attr: name (v/c), interaction_type (e/c) + edges (vertex names): [1] A1BG ->A2M A1BG ->AKT1 A1BG ->AKT1 A2M ->APOA1 [5] A2M ->CDC42 A2M ->RAC1 A2M ->RAC2 A2M ->RAC3 [9] A2M ->RHOA A2M ->RHOBTB1 A2M ->RHOBTB2 A2M ->RHOB [13] A2M ->RHOC A2M ->RHOD A2M ->RHOF A2M ->RHOG [17] A2M ->RHOH A2M ->RHOJ A2M ->RHOQ A2M ->RHOT1 [21] A2M ->RHOT2 A2M ->RHOU A2M ->RHOV A4GALT->ABO [25] A4GALT->AK3 A4GALT->B3GALNT1 A4GALT->B3GALT1 A4GALT->B3GALT2 [29] A4GALT->B3GALT4 A4GALT->B3GALT5 A4GALT->B3GALT6 A4GALT->B3GAT2 + ... omitted several edges
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
Question: of the three networks that you just created, which has the most edges? (One way to check this programmatically is sketched below.) Next, we need to create a small graph. Let's make a three-vertex undirected graph from an edge list, connecting every vertex to every other vertex: 1-2, 2-3, 3-1. We'll once again use `graph_from_data_frame` to do this:
testgraph <- graph_from_data_frame(data.frame(c(1,2,3), c(2,3,1)), directed=FALSE)
_____no_output_____
Apache-2.0
class02a_igraph_R.ipynb
curiositymap/Networks-in-Computational-Biology
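To answer the edge-count question above programmatically, compare `ecount` across the three graphs; a quick check, assuming the three graph objects created earlier are still in the workspace:

# number of edges in each of the three networks
c(ppi = ecount(graph_ppi), metab = ecount(graph_metab), ppd = ecount(graph_ppd))

From the summaries printed above, the undirected protein-protein interaction network has the most edges (523,498, versus 359,713 directed protein-protein edges and 38,145 metabolic edges).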